with Spencer Greenberg
the podcast about ideas that matter

Episode 165: Virtual reality, simulation theory, consciousness, and identity (with David Chalmers)


July 6, 2023

What does philosophy have to say about virtual reality (VR)? Under what conditions is "normal" reality preferable to VR? To what extent are VR experiences "real"? How likely is it that we're living in a simulation? What implications would the discovery that we're living in a simulation have for our beliefs about reality? How common is Bayesian thinking among philosophers? How should we think about identity over time if selves can be split or duplicated? What might it look like for our conception of identity to undergo a "fall from Eden"? What do people mean when they say that consciousness is an illusion? Finding a grand unified theory of physics seems at least in principle the sort of thing that science can do, even if we haven't done it yet; but can science even in principle solve the hard problem of consciousness? Might consciousness just be a fundamental law of the universe, an axiom which we must accept but for which there might be no explanation? Is consciousness needed in order to attain certain levels of biological evolution? How conscious (or not) are our current AI models? Statistically speaking, what are the most prevalent views held by philosophers?

David Chalmers is University Professor of Philosophy and Neural Science and co-director of the Center for Mind, Brain, and Consciousness at New York University. He is the author of The Conscious Mind (1996) and Reality+ (2022). He is known for formulating the "hard problem" of consciousness, which inspired Tom Stoppard's play The Hard Problem, and for the idea of the "extended mind," which says that the tools we use can become parts of our minds. Learn more about him at

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with David Chalmers about virtual reality, the simulation hypothesis, and consciousness and AI.

SPENCER: David, welcome.

DAVID: Thanks, Spencer, pleasure to be here.

SPENCER: You have a philosophical mind I've long admired, so I'm really excited to dig into a bunch of philosophy topics with you today. Let's get started with the topic of your new book, Reality Plus, about philosophy and virtual reality. Do you want to start us off by telling us about why virtual reality is relevant to philosophy?

DAVID: Well, I think philosophy, at least a whole lot of philosophy, is about the relationship between the mind and the world, between consciousness and reality. I've spent a lot of my career thinking about consciousness, about the mind. And to think about the mind, partly you want to think about the minds we have, the human mind. But it's also really useful to think about artificial minds, the kind of minds we might create. And then you can ask questions like, is an artificial mind a genuine mind? Could an AI system have consciousness? And I spent a lot of my career thinking about that. But you can ask parallel questions about reality, about the world side of the equation. We've got one reality here that we can study, but we can also create new ones. We can create artificial reality and those are virtual realities, virtual worlds. I see virtual worlds as playing the same role in the discussion of reality that artificial minds, artificial intelligence, plays in the study of the mind. And philosophers have thought for a long time about AI. But I find these issues about VR, virtual reality, are actually very rich to think about. And you can approach many different philosophical problems that way and come to all kinds of interesting conclusions.

SPENCER: It seems to me that, if VR technology continues to advance for the next 100 years and humanity doesn't send itself to the Dark Ages or wipe itself out, it's very likely we'll end up in a situation where VR is just really, really fantastic. You can explore virtual worlds that are photorealistic and audiorealistic, and maybe even touch realistic and smell and taste realistic at some point, that you could hang out with your friends in these worlds, that you can do all kinds of impossible things you can't do in the real world, whether it's teleporting or flying or building things just based on speaking words and having it be created. And yet I think a lot of people have this feeling that there's something subpar about the idea of being in a virtual world, that somehow, it pales in comparison to the (quote) 'real world.' What are your thoughts on that?

DAVID: Yeah, well, I think current VR technology is certainly subpar in many ways. It's still pretty primitive, the audiovisual quality is not perfect, you've got to wear these annoying headsets, you don't have much in the way of bodily perception or touch; it's pretty well just vision and hearing, and embodiment is very much lacking. But that's an artifact of the current state of the technology. And as you say, fast forward 100 years, I'll be disappointed if most of those things aren't pretty well handled. The body is tricky in VR, but with the right brain-computer interfaces, for example, where a virtual reality might interface directly with parts of your brain that represent the body, that represent taste, smell, and so on, maybe that could ultimately lead to a very good embodied experience in VR, along with the vision, the hearing, and so on. You eventually get to the point where the virtual reality might be indistinguishable from a physical reality, if you wanted to go in that direction. Or maybe, even better, it'll be quite different from physical reality, but better in all kinds of ways. There'll be all kinds of new things we can do, new experiences, new forms of embodiment, and then you come up against the question... Well, some people will say, even though it might feel great from the inside, feel like a rich experience, the very fact that it's virtual makes it a subpar reality. And that's the view I want to combat in this book. The central thesis of my book, Reality Plus, is that virtual reality is genuine reality, which includes the thesis that, when you're in a virtual world, it's not merely a fiction or a hallucination or an illusion. What happens inside a virtual world really happens. And it also includes the thesis that you can actually lead a meaningful life in a virtual world, that your digital reality is just as good as physical reality in supporting a meaningful life in a genuine reality.

SPENCER: Do you want to set us up with the case against virtual reality? Why do you think people have this sense that it's worse? And then you can rebut that case.

DAVID: In a way, this goes very deep in the history of philosophy. Plato had his example of the cave. Could we be like prisoners shackled inside a cave, looking at shadows on a cave wall? And that was supposed to be very much a second-class reality. Some people think of VR that way. You're looking at shadows on the cave wall. Descartes said we could be being manipulated by an evil demon who gives us the sensations of a reality when none of it is real. And some people will see virtual reality as a bit like Descartes' evil demon, a machine for producing illusions in us. The philosopher Robert Nozick more recently — the American philosopher, maybe 50 years ago now — had an example of the experience machine. He said, just say you had a device that you could step into. It'll give you pre-programmed experiences of leading a wonderful life. Maybe you get to be the world's greatest philosopher or the world's greatest mathematician, wonderful family, wonderful friends. But it's all just a pre-programmed virtual world. Nozick said, "Would you step into the experience machine?" He said he would not step into the experience machine and many people take that view, that the experience machine would somehow be a subpar reality. And I actually agree about the experience machine, not because it's virtual. The problem with the experience machine is that it's all pre-programmed. You're basically living out a pre-scripted reality where you're not really doing anything to make all this happen. So none of these are real achievements of yours. There's basically no free action. Inside an actual virtual world though — the kind we're building now, and the kind you can enter even in a video game virtual world — it's not pre-programmed and pre-scripted like that, or at least it needn't be. Inside a social virtual world, you have free will insofar as anybody does.
You can control your actions, you can meet new people, you can build relationships, you can build communities, you can take on new enterprises and have new achievements. So I would argue that, at least when it comes to a non-pre-programmed, non-pre-scripted virtual reality, there's basically no reason why it can't be just as real as physical reality. It's digital, to be sure. There is this idea out there that digital reality is not genuine reality. There's that annoying abbreviation — IRL, in real life — that contrasts real life with digital life. But I think by now we know, digital life has become an important enough part of our life that, merely because something is digital doesn't mean it's not real. Digital reality is just another form of reality. And I think that goes in spades for the virtual realities we're building. They're not illusions. They're not fictions. They're not fake. They are genuine forms of reality.

SPENCER: You mentioned relationships, and that seems really key to me. If I was in a virtual reality, and I found out that there were no other conscious beings in that reality, I think that would be very disturbing, and it would reduce a lot of the value of that world to me. Whereas, if I was genuinely interacting with conscious minds, and making friendships and so on, I think that would give it a lot more meaning. I'm curious what you think of that aspect.

DAVID: Absolutely. My own view is that consciousness is the main — possibly the only, but at least, the major — locus of meaning and value in our lives. We're conscious beings. That gives our life value. The connections we make to other conscious beings give our life value. Without consciousness, it's not clear there'd be any meaning and value in our lives. So I suppose, inside Nozick's experience machine, it could be, depending on how it's set up, that you are the only conscious being and that everyone else turns out to be a non-player character, philosophical zombie, no consciousness at all. If that were the case, that, I would agree, would be seriously lacking in meaning and value. If you believe you're actually, say, in a loving relationship with somebody and they turn out to be an unconscious NPC, then, okay, the world is not how you thought it was. That's a value theoretic disaster. Your life is, in many respects, bad. But in the kind of virtual realities, even the virtual realities that we make now — say a massive multiplayer online game — you're already in contact with many other conscious beings. Sure, there are some NPCs, but you're in contact with a bunch of other conscious beings using that virtual reality and likewise with the social virtual world. So as long as we think of these as multiplayer virtual realities with multiple conscious beings in them, then I think you're well on your way to having meaning.

SPENCER: We talked about the importance of conscious beings being in these worlds. And you earlier mentioned this idea of choice, that one of the problems with Nozick's experience machine is that you are not actually making choices. I'm wondering, besides choice and existence of other minds, what else do we need in a virtual world to give it value, from your point of view?

DAVID: I think you have a lot of different sources of value. Some of them come from building relationships and building community. Some of them come from the experiences that you have. The hedonist view says that all value comes from having positive experience, like pleasure and happiness, having a balance of that over negative experience, like pain and suffering. I don't think this is the only source of value in our lives, but it's certainly one major source of value. I'm also inclined to say certain things like knowledge and understanding have intrinsic value. Maybe this is just something I say because I'm an academic, a philosopher, with a background in the sciences, and really value understanding, but I think understanding and knowledge carry intrinsic value. More generally, having desires and having those projects be fulfilled, be satisfied — say, trying to build a family and have that succeed, or trying to build a company and have that succeed, or trying to understand something and have that succeed — I think we find value in those things as well. I'm really a pluralist about the forms of value. And I think, to some degree, it's going to be subjective and depend on individuals' values and what they want. And I do think that consciousness is going to lie at the core of much of what we value.

SPENCER: You mentioned knowledge being a potential intrinsic value, and I think that's especially interesting with virtual worlds. Because I think some people would say, "Well, if you're in a virtual world, and you see a red apple, even if it looks like a red apple and smells like a red apple, because the virtual world has some kind of smell capability, and tastes like a red apple, because it has taste capability, it's not really an apple. You don't really have knowledge of an apple." I'm curious to hear your reaction to that.

DAVID: Yeah, it's an interesting case here. There's two versions of the case. One is, you go with the full-scale simulation hypothesis, where we've actually been in a virtual world from the very beginning. And then, every apple we've ever seen has actually been a virtual apple or a digital apple. Then I want to say, "Yeah, real apples are virtual apples, are digital apples. There's nothing illusory about that. There have been apples here all along. They've just been digital apples. That's what we mean by apple."

SPENCER: You're saying if it turns out that we're living in a simulation, and our entire reality is, let's say, a computer simulation and we just don't realize it, right?

DAVID: Yeah. In that case, basically, our world has been a digital world all along. Every apple we've ever seen has been a digital apple. And in that case, I want to say, "Well, yeah, virtual apples are real apples. It just turns out that they're digital, like everything else in our world." It's a bit like them turning out to be quantum mechanical. But the other case, the other tricky case is where it's not that we've been in a virtual world all along, but we create virtual worlds with virtual apples. We can do that now. They're not very convincing virtual apples. But then I think the right thing to say is that, "Well, then, virtual apples are not the same as physical apples." Physical apples are not digital, let's say. But virtual apples are digital. They're something different, but they might nonetheless come to play a very similar role in our lives. Well, it's tricky with apples, because there are tricky issues about food in VR and so on. But yeah, what I would want to say is, at least for an expert user of VR, a virtual apple is not the same as a physical apple, but it's also not an illusion. An expert user of VR won't perceive a virtual apple as a physical apple. If they did, that would be an illusion. But I think expert users of virtual worlds know they're in virtual worlds. They interpret the world they're in as a virtual world. When they experience a virtual apple, they form thoughts basically of the form, 'there's a virtual apple there,' and they're right about that. So virtual worlds here are different from physical worlds — assuming the physical world we started with was not itself a virtual world — but again, just as real.

SPENCER: I think the movie The Matrix can be informative here as an example, and I imagine almost everyone has seen it. But if you haven't seen it, basically people are living in a virtual world, and they don't realize it and there are kind of robot creatures that actually put all the humans there and are controlling things, using the human minds for some ends. And I think what's interesting there in relation to our conversation, is that in The Matrix, if you're eating an apple, you're getting the benefits — or you perceive yourself getting the benefits — that you would from a physical apple. But there's still a sense in which everyone in that virtual world is deluded or that there's some way in which the virtual world is obscuring truth, because the nature of what's going on is being hidden from the people in it. And so I think that linking virtual worlds to obscuring truth, or believing falsehoods, is not totally ridiculous, because there are probably lots of ways that they can hide what's going on. I'm curious what you think about that with regard to truth and the existence of virtual worlds.

DAVID: The Matrix is a really interesting case. It's a beautiful illustration of a virtual world, of the whole simulation hypothesis that we can be living in a simulation. But it does have some extra features. For example, it was created by these machines who want to exploit us; I guess we're used as energy sources. And with humans used as energy sources, it's very important to the machine that no one discovers what's actually happened here. It's important that we had a prior life — that humanity had a prior life — not in a virtual world. We started off in a non-simulated world, and got put into a simulated world. That means there's a whole lot that is being hidden from us about the prior history of our own lives, and so on, and that's bad. But I think you could have come up with another case where we were created, say, by a benign simulator who created this whole universe inside a simulation, and we were creatures of that simulation right from the start. And who knows, maybe they did it out of benign motives or whatever motives. But in that case, there still might be things we don't know. For example, if we're in a simulation, we may or may not know that we're in a simulation, so there's something we don't know about the nature of our world. But I'd still say that the core ordinary common sense things that we think we know — like, for example, that there are tables and chairs, and people and communities and so on — all that will still exist inside a virtual world. It's true that there are some things you don't know. But I guess I'd say that it's not so much that we're wrong in our commonsense beliefs about the world. It's just that we don't know that there are some things that we don't know. It's a bit like not knowing that our world is a quantum mechanical world, and coming to discover that. Or maybe not knowing that there's a god, and coming to discover that. If we discovered we were in a simulation, well, boy, that would be a massive theoretical discovery. 
Our world is digital, there's a creator, but it wouldn't show that the world we were in was not real. It was real, I think, all along.

SPENCER: I was at a dinner once and we were discussing possible future good outcomes for humanity as a species, hundreds or thousands of years from now. And I mentioned that I thought one possible potential good outcome would be, people live in virtual worlds, but they have a lot of choice about which virtual worlds they live in. Maybe there are thousands or millions of such worlds with different properties and rules, and you can choose which ones to visit. And you can have all of these wonderful experiences and it removes a lot of the needs that we normally have. Suppose that we have plenty of food sources in the physical world, maybe we're fed these food sources automatically. But in the virtual world, we could just have food for fun. And we can have all these experiences for fun, and it can be detached from the normal physical desires and needs. I'm wondering, do you think that that would be a good outcome for humanity or would you be concerned about such an outcome?

DAVID: Choice between virtual worlds is interesting. It does seem that, as the technology develops, we may well have the choice of inhabiting lots of different virtual worlds, just as people already have that choice to some degree with various online virtual worlds and video games, and so on. In fact, one way of reading the title of my book, Reality Plus, is thinking of it a bit like a streaming service for realities. Just as you've got Apple TV plus, or Paramount plus, or Disney plus, or whatever the streaming services are for online content, we can imagine a smorgasbord of virtual realities to choose from. And I think it's good in some ways. For example, you'd have your choice of different kinds of society with different forms of governance, and maybe could experiment with different forms of governance, and certainly for purposes of having new experiences, there'd be all kinds of venues for new experiences. I do think at some level though, the real value in our lives comes not just from having experiences, but from building things, from serious projects and relationships, and so on. In the long run, I would hope that these virtual worlds aren't just escapist avenues for new forms of experience, but places where people can build communities and build projects and do many of the things that we do — most of the things that we do in the physical world — just now happening within virtual worlds. So for example, serious political activism and building a society where people's lives are better, working to better people's lives, not just in the virtual worlds, but indeed in the physical world because everything will be interconnected.

SPENCER: Yeah, when I brought this idea up at dinner saying perhaps this could be a good outcome for humanity, I got a pretty strong negative reaction from people. And one of the reactions I got was, "But people wouldn't really be doing things. They wouldn't be doing the important stuff like moving atoms from place to place." The people didn't say it exactly that way, but there was some sense in which, "Well, but what of all the stuff we need to do in the physical world?" And the funny thing is — at least from my point of view — in virtual worlds, a bunch of the things you have to do in the physical world are just no longer needed. For example, we spend a bunch of time traveling around from place to place so that we can see people so that we can do things where, in virtual worlds — if the virtual worlds are actually completely immersive and convincing — you can just see any friend anywhere in the world. Location stops mattering. Or building stuff, if you're building objects in the virtual world, you no longer have to get matter and move it from place to place at great expense. You can just make it appear. There's a bunch of these things that we're used to needing to do, but I think there's also some attachment that those are the real things. The virtual world is sort of, I don't know, it's not the real thing.

DAVID: I do think the physical world matters, and it would be a disaster if, somehow, just going and spending a lot of time in virtual worlds meant that we totally neglected the physical world, not least because all these virtual worlds — the virtual worlds we create — are all grounded in the physical world. If we lose control of the physical world, we'll also, to some degree, lose control of those virtual worlds. Also, I presume there are going to continue to be people in the physical world, even if some of the more privileged people get to hang out inside virtual worlds. There'll be people with better and worse access. We've got to make the world as good as possible for everyone. But that's just to say, "Okay, we should attend to the physical world. What happens there matters." But that's totally consistent, I think, with 'what happens in the virtual world matters,' in many respects, just about as much. We want people who are in virtual worlds to have lives which are as good as possible. There can be suffering and exploitation in virtual worlds. We want to get rid of that. It may well be that we can build societies in virtual worlds that are in many respects better than societies in the physical world. You mentioned some of the respects in which virtual worlds can easily be better, for example, minimal needs for transportation. Another thing is an abundance of material goods. Inside the physical world, you build a house, it's still really hard work to build another house. Whereas, in the virtual world, you build a virtual house, and it's trivial to duplicate and build any number of virtual houses, more or less exactly the same. There's near unlimited space, this abundance of physical goods, which could make for the possibility of new forms of equality in virtual society. There's also obviously all kinds of ways it could go wrong. 
It could turn out to be exploited by the people who create it, in ways analogous to the ways in which people in the Matrix are exploited by the machines, and people could lack privacy and autonomy. But I do think there are a lot of possibilities there. And there's nothing there which is meaningless, in principle. Moving bits around is, in principle, just as important, I think, as moving atoms around.

SPENCER: You mentioned some of the darker sides of this. I think that's another concern people have, is the immersiveness also gives control. Right now, many people feel totally addicted to their smartphones. But imagine you're in a virtual world where a company actually has control of the complete 3D environment, the sights, smells, tastes, even physical feelings that you experience. There's something very unsettling about that.

DAVID: It's true. The creators of a virtual world are like the gods of that world, potentially all-powerful, all-knowing, and so on. And you think, yeah, the kind of control that social media companies have over your social media algorithms is intrusive. Well, wait till they're controlling the whole world around you. There are losses of privacy, losses of autonomy — already, this happens in the physical world, with advertising and the like; we're constantly being manipulated — but in a virtual world, this could go to a much greater extent. It may be that, for example, people with enough money or enough resources will be able to, say, pay to inhabit virtual worlds where they have certain degrees of autonomy and privacy, and so on. But at the same time, it may be that people with fewer resources, well, they get free access to virtual worlds but, as with many other things in the digital world, the condition of entry is that they give up a certain degree of privacy and autonomy. So we could have unequal access to autonomy or to self-governance, and so on, which could itself be a real problem. I don't know what's going to happen with governance of virtual worlds, but I certainly hope they don't all end up being corporatocracies just run by companies like Meta or Google or Apple, or state-controlled and state-governed virtual worlds, but instead turn out to be user-controlled and user-governed virtual worlds. But yeah, there are going to be any number of big new political problems to worry about in virtual worlds on top of all the old ones we know and love.

SPENCER: Certainly recording everything someone does is much easier in a virtual world. It's essentially all going through the servers anyway.

DAVID: Yeah, I guess people are going to have the choice to basically record their whole lives in a virtual world, at least for their own purposes, or not. And then, yeah, could someone else be doing that to you and exploiting that? Well, we already face that issue with our whole digital lives, everything we do by email or messaging or social media, and so on. But yeah, this would just be the whole thing squared or cubed.

SPENCER: Now, a couple of times, you've mentioned the simulation hypothesis. Let's dig into that more. Can you tell us about the origin of the idea of the simulation hypothesis and the case for it?

DAVID: The simulation hypothesis is the hypothesis that we're actually living in a computer simulation right now. We're living in a virtual world created by a simulator with simulation technology. In some ways, this hypothesis has its roots throughout the history of philosophy. You find versions of it, at least in Plato's cave, or Descartes's hypothesis that we might be dreaming now. But the genuine simulation hypothesis really doesn't come up until you get to the computer age, because it's key to the simulation hypothesis as we think of it today, that we're in a computer simulation. So you'd need to have the idea of a computer to formulate that. Actually, when writing this book, I was interested in looking at antecedents and origins of the simulation hypothesis, and where does it really first appear. Basically, science fiction is the place to look, because ideas of these dream machines had already been around for a couple of centuries, but for the genuine computer simulation, you find things like it in Arthur C. Clarke and a couple of early science fiction stories. Maybe the classic introduction of it is in the science fiction novel Simulacron-3 (or is it Simulacron-4? I never remember) in 1964, which was made into the movie, World on a Wire, by Rainer Werner Fassbinder, and served as the basis for The Matrix, The 13th Floor, and many other classic simulation movies. So then the simulation hypothesis, from there, found its way into philosophy and many philosophers discussed it. I think it probably really got a boost in recent times from Nick Bostrom's work on the simulation argument. What Bostrom introduced was to take the simulation hypothesis, which had been around for a while, but add a statistical argument that we should take the simulation hypothesis seriously. 
The rough idea is, other things being equal, you would expect any given society will eventually create a whole lot of simulated worlds and a whole lot of simulated people, maybe thousands, millions, billions more simulated people than non-simulated people. And then you start thinking probabilistically. Maybe the odds are that we ourselves are simulated people just because there are way more simulated people than not. All that's going to depend on various assumptions. Ways to get out of this might be that civilizations die off before they create virtual worlds, or that simulated consciousness is impossible. But nonetheless, this statistical argument, the simulation argument, I think, provides an extra kind of boost for taking the simulation hypothesis, previously science fiction, and thinking of it as a genuine, live possibility, driven by the existence of this technology that we already have in primitive forms.


SPENCER: Let's walk through the argument in small steps. I just want to make sure people understand, really, why it works or doesn't work as an argument and how they should update on this, like should this actually convince them that they might be living in a simulation? It starts with this assumption that advanced civilizations will eventually start simulating lots of beings. Is that right?

DAVID: Mm-hmm, that's a natural way to do it. Yeah.

SPENCER: And so if civilizations eventually simulate lots of beings, there's a second piece of the argument, that eventually the number of beings they simulate will be vastly more than the number of non-simulated beings.

DAVID: Yeah, in the version in my book, I tried to put some numbers on this, just to make it concrete for a first version. We say something like, "All you need is at least one in ten unsimulated populations, each creating, say, a thousand simulated populations." And then it looks like, from there, you get to "at least 99% of, say, intelligent beings are simulated." So then you get to "a large majority of all beings, or of all conscious beings, are simulated," and from there, you make another move, and you get to "it's at least 99% likely that we are simulated beings."
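The arithmetic behind those numbers can be sketched in a few lines (the figures are the illustrative ones just mentioned, not firm estimates):

```python
# Illustrative arithmetic for the simulation argument, using the figures
# mentioned above (at least one in ten unsimulated populations each
# creating a thousand simulated populations). Toy numbers, not estimates.

unsimulated = 10            # a reference pool of unsimulated populations
creator_fraction = 1 / 10   # at least one in ten of them creates simulations
sims_per_creator = 1000     # each creator runs ~1000 simulated populations

simulated = unsimulated * creator_fraction * sims_per_creator  # 1000
fraction_simulated = simulated / (simulated + unsimulated)     # 1000 / 1010

print(f"simulated fraction: {fraction_simulated:.1%}")  # 99.0%
```

On these numbers, for every 10 unsimulated populations there are at least 1,000 simulated ones, so at least 99% of populations are simulated; larger creator fractions or more simulations per creator only push the fraction higher.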

SPENCER: And it seems like that last move you just mentioned has at least two different pieces to it. One, which you mentioned earlier, is that it assumes that simulated beings are conscious, which some people think would be true, some people deny. But there's also another piece there around this idea, that anthropic principle around, "Well, is it really fair to say, I'm equally likely to be any one of these beings?" And so do you want to just touch on that assumption baked in there?

DAVID: The second one basically turns on what philosophers call an indifference principle. Say you have 1,000 beings in the universe with experiences like mine, and you know that; then it looks like, in principle, I'm equally likely to be any one of them. Maybe it turns out that, of the beings with experiences like mine, 999 of them are in simulations and one is not; then we say, "Okay, there's only one chance in 1,000 that I'm the one who's unsimulated." And it's certainly true that a way to respond to this argument is to deny the indifference principle. And some philosophers have wanted to do that by saying, basically, you shouldn't just give equal weight over experiences; maybe some hypotheses get higher weight. I think it's hard to make this run, and I've got various things I say about this in the book. Then the other way out — one other very natural way out — is to deny that simulated beings will be conscious, or maybe they could be conscious but they could never be intelligent, and then say, "Look, we're conscious, and we're intelligent. Therefore, we're not simulated." To block that, you've got to make the case that simulated beings actually could be conscious. I make that case in various things I've written: that a simulated brain could, in principle, have exactly the same conscious experiences as the brain that it's simulating. But I could be wrong. And so you might say, "Here's one way the argument could go wrong: maybe there's at least a 25% chance that simulated beings are not conscious." If that's right, then we ought to at least reduce credence in the simulation hypothesis correspondingly.

SPENCER: Have we covered all the assumptions baked in or are there any others you would point to?

DAVID: Oh, there are really so many; I've divided them into two groups. One kind of assumption is what I call a sim blocker. And this is a reason for thinking that, actually, maybe simulated universes will not be nearly as common as unsimulated universes. Those include a couple that Bostrom considers. One thing he considers is that everyone will die off before they get to the point of creating simulations: maybe something about technology; maybe, to get those simulations, you need AI, and once you get AI, you're already doomed. A lot of people take that seriously. If that's right, maybe we will never get to simulations. Or we get to the point where we could have simulations, but people choose not to create them, because we think it's a bad idea for various reasons, or it's unethical. So there are various sim blockers that might have the consequence that there are not so many simulations. There are also what are called sim signs and non-sim signs. And these are basically objections of the form, "We're special. There's something about our experience that means experiences like these would be relatively unlikely in a simulation." And the paradigm case of a non-sim sign might be consciousness. If you believe that simulated beings will be unlikely to be conscious, then our consciousness would itself be evidence that we're not in a simulation. But maybe there are others. Maybe there are forms of our experience. We seem to be in this enormous universe. Maybe most simulated beings would experience just being in a very small universe. So maybe that provides some evidence we're not in a simulation. I'm doubtful about that, but I do think you need to actually consider both classes of objections here. In one of the chapters of the book, I really tried to break down seven or eight possible sim blockers, seven or eight possible sim signs, and then say, "What should these do to your probabilities?"

SPENCER: I think a lot of people assume that Nick Bostrom, having come up with the argument, believes we're living in a simulation. But if I recall correctly, I think he only assigns something like a 20% chance that we are living in a simulation. I'm curious where you land. Considering these arguments, how likely do you think it is that we're in a simulation?

DAVID: I think something like that Bostrom estimate is not bad. This is not a conclusive argument that we're living in a simulation. There are just way too many ways that things could go wrong, all these sim blockers and sim signs. I think in the book, I suggest, yeah, it wouldn't be totally unreasonable to be at, say, 25% that we're in a simulation. In my book, I say, "First, are conscious human-like simulations possible?" I think it's reasonable to say it's more likely than not that conscious human-like simulations are possible. Second, if they're possible, will many populations create them? Will enough of them be created that, say, they outnumber non-simulated people? And I think it's reasonable to say that, if they're possible, then many of them will be created. So I give at least 50% credence to "they're possible," at least 50% credence to "if they're possible, we'll create them." So we end up at something like 25% overall.
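The two-step credence chain is just a product of conditional probabilities. A minimal sketch, using the 50% figures Chalmers offers as illustrative numbers rather than precise estimates:

```python
# Multiply the two conditional credences from the conversation:
# P(simulated) = P(possible) * P(created enough to outnumber us | possible)
p_possible = 0.5                 # conscious human-like simulations are possible
p_created_given_possible = 0.5   # if possible, enough are created to outnumber us

p_simulated = p_possible * p_created_given_possible
print(p_simulated)  # 0.25
```

Note these are lower bounds in Chalmers's phrasing ("at least 50%"), so the product is a floor, not a point estimate.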

SPENCER: And can you just unpack some of the reasons that they will be created? Because people might wonder, "Well, why would people make so many simulations?"

DAVID: Even if nine in ten societies decide this is a bad idea, we just need one of the remaining societies to create 100 or 1,000 simulations, as it were, and then the simulations will outnumber them. Because all of this is speculative, it may well be that there comes to be some universal principle against creating simulations that everybody abides by. But I'd say that the incentive to create simulations could be quite strong. In many ways, people are already creating simulated universes left and right, for the purposes of science, as well as the purposes of entertainment and the purposes of finance, and so on. It would be surprising if those incentives got smaller. Anyway, that's why I end up at 50% credence in "these simulations will be created."

SPENCER: And 25% overall for the argument is not so high. But also for such a shocking hypothesis, that's actually not so low. If it's actually true we're living in a simulation, it seems like that would dramatically change our view of what reality is or what the universe is, or the nature of God, etc. So, if that hypothesis is true, what do you make of that?

DAVID: Basically, don't take the numbers, the 25%, too seriously. But I do think it's a live hypothesis that we should take seriously. And yeah, the next question then is what follows. Some people think, if we're in a simulation, that means that nothing in the universe is real, and everything is meaningless. I've already said why I disagree with that. I think, even if we're in a simulation, there are still real things around us, there are still real people, real communities, real projects. That said, it would nonetheless be a massive development in our understanding of the universe. For a start, if we're in a simulation, then there's presumably a simulator. And we'd want to know something about the nature of this simulator, who would have some godlike relationship to our universe. We would care a lot about the motives of this simulator, just as we care about the motives of a god. We want to know, is this going to keep going? What is going to be the future of our world? Do we get uploaded at death, for example? Many of the motives of traditional religion might enter into thinking about the simulation hypothesis. Not to mention, once we know that we're just at this one level of reality — we're at, like, level 42 in Reality+ — I think we suddenly get very interested in knowing more about the world we're embedded within. Could we, for example, come to know about the world in which our simulation was created? Could we come to travel there, ultimately, to be reembodied? I think all of these questions would then become open. It's a little like going from just knowing your small town to "Hey, well, there's a whole big country," to "There's a whole big planet," to "There's a whole big universe," and then to "There's a whole big cosmos of universes," of which we're a tiny slice. I think that would have all kinds of effects.

SPENCER: This is making me think about the Rick and Morty episode where Rick ends up building an entire universe to power his car battery. And then the beings in this universe are trying to figure out why they exist. And one of them discovers that the whole purpose of their universe was to power a car battery. And you can imagine these kinds of incredibly trivial reasons why someone bothered to simulate us.

DAVID: Yeah, this is the case where the simulator turns out to be godlike in some respects, maybe all-powerful, all-knowing, but there's certainly no reason to think the simulator is going to be all good. And yeah, the case where it's Rick who created the universe just shows you some of the possibilities of a non-benevolent simulator.

SPENCER: Changing topics now, I want to pick your brain about some philosophical questions that have been bothering me for a while and get your perspective on them.

DAVID: Sure.

SPENCER: One of them: you mentioned probabilities when you were talking about the simulation argument. And I tend to think that probabilities are usually the right way to think about complex topics. And in particular, I'm a fan of Bayesianism in principle, if not in practice. I don't think you should always be calculating using Bayes' rule; it's totally impractical. But I think at least it gives us mathematical guidelines for how to think probabilistically. For those who don't know, the basic idea is, you have some belief about the world, you assign it a probability, you get some evidence, and then Bayes' rule tells us how to adjust that probability based on that evidence. And there's a simple rule of thumb that you can prove mathematically, which is basically that you ask, "What's the probability of seeing that evidence if your hypothesis is true, compared to the probability of seeing that evidence if your hypothesis is false?" And that ratio tells you how much to update, to change your mind. One thing I've observed, just talking to a bunch of philosophers, is that philosophers don't seem to think in this Bayesian probabilistic way. And I'm wondering if you think I'm wrong about that, or if you think I'm right, and if so, are they making a mistake? Or is it something about philosophy where it's not the right tool? I'm curious to hear your thoughts.
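The rule of thumb Spencer describes is Bayes' rule in odds form: multiply your prior odds by the likelihood ratio of the evidence. A minimal sketch, with purely illustrative numbers:

```python
# Bayes' rule in odds form:
#   posterior odds = prior odds * P(evidence | H) / P(evidence | not-H)
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the posterior probability of a hypothesis given the evidence."""
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_evidence_if_true / p_evidence_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Illustrative example: start at a 25% prior, then observe evidence that is
# twice as likely if the hypothesis is true (0.8) as if it is false (0.4).
print(bayes_update(0.25, 0.8, 0.4))  # 0.4
```

The likelihood ratio is the whole story here: evidence equally likely under both hypotheses (ratio 1) leaves the prior untouched, while evidence twice as likely under the hypothesis doubles the odds.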

DAVID: Oh, I don't know. Maybe we hang out with different philosophers. Bayesianism is incredibly popular among philosophers these days, in a way it might not have been 30 years ago. People doing epistemology did it very informally, in terms of knowledge and belief, and so on, but sometime maybe around 20 years ago, epistemology took a formal turn: for a start, thinking not in terms of full belief, but in terms of degrees of belief, credences, or probabilities. And then among the people who work with degrees of belief, Bayesianism is by far the most popular framework for thinking about how you should update beliefs in light of evidence. There are other frameworks, but everyone, I think, treats Bayesianism at least as the default or the one to be knocked off its perch. These days, there's this huge project of formal epistemology of taking all the problems of philosophy and reformulating them in Bayesian terms. So I'm interested to hear where you've had these experiences of philosophers rejecting Bayesianism.

SPENCER: It could be I'm just hanging out with the wrong philosophers. But I'm actually not talking about the formal project of epistemology. I'm talking about just talking to philosophers in the way that they think about their philosophical problems. I don't feel like they're trying to put their own thinking in Bayesian terms when they're working on philosophical problems. Again, I could be totally wrong about that. But yeah, I'm curious if you think differently.

DAVID: That's true. When I do philosophy, sometimes I do it with degrees of belief and think about what those degrees of belief should be in light of the evidence, but sometimes not. There's one thing which is kind of weird about doing things in Bayesian terms with degrees of belief, which is the moment you actually name a figure — the way I named 25% and so on — the moment you name a figure, it seems like specious precision. Where did that number come from, really? Is it really 25%? You got that from 50%? Why is it 50%? Isn't that arbitrary? And the Bayesian ultimately wants to ground all this in certain priors, but then why are those your priors? There are indifference principles and the like you can use, but they're always controversial. So I think for some people, it's the specificity of Bayesian reasoning that puts them off. Of course, there are ways you can reason just about likelihood boosts and so on, without all that, but maybe that's part of it. I guess the other part of it is that so much of the philosophical interest in Bayesian reasoning is pushed to the priors. Sure, you can update in light of evidence and push things in a certain direction. But so much of the epistemological interest is, why is that the rational prior to have about this? Imagine a superbaby coming into the world. If superbaby's got some wonderful priors, then they can just update on evidence and be a good Bayesian the rest of their life. But so much of what's of philosophical interest is going to take the form, "Why is this the right prior probability and not that one?"

SPENCER: Another question I have that bothers me, related to philosophy, is this robot thought experiment, my favorite thought experiment in philosophy of mind. And the way it works is, imagine someone, every day they go in to a surgeon, and this is an advanced world where surgery has gotten much better and our technology is much better. And each day, the surgeon makes a tiny incision, removes a very small part of their brain and replaces it with a mechanical or robotic version of that part of the brain that behaves extremely similarly. Within the rules of physics, it's as close as possible to behaving the same way as that little part of the brain. And we can imagine that, if this is done, then after each of these surgeries, the person feels unchanged, because a very, very tiny portion of the brain was removed, but it was replaced with something functionally the same. And so they wake up from surgery, they feel the same, they act the same, they think the same. And so it feels like there's this continuity, that this continues to be the same person throughout the different surgeries. But then, little known to this patient, what was actually happening was that, each time these little parts of the brain were removed, the surgeon was carefully preserving them. And then at the end, after all these surgeries have taken place, this person's brain has been completely replaced. So now they have a completely robotic brain, no parts in common with the original brain. And the surgeon goes, without them knowing it, and reconstructs all the biological pieces into a new human brain that is exactly configured like the original human brain. 
In other words, we have these two versions of this person now, one where they're slowly getting these parts of the brain swapped out with robotic versions and they end up as a fully robotic version of themselves, and the other, which is this brain that's made of the biological parts that gets slowly assembled bit by bit until it's just exactly the original brain of all the original parts, all the original molecules. And so then this raises the question, well, the simple version of the question is, "Well, which of them is really this person?" But I prefer it in a version that says something like, "Suppose that this person is totally selfish. And one of these two minds is going to get tortured later on. Which of them would they choose or should they choose to be tortured, the robotic one or the one that has all their biological parts?" And so yeah, I'm curious how you analyze this philosophical thought experiment because I find this really troubling, this thought experiment.

DAVID: It's an interesting scenario. I've talked about scenarios like this way back in my first book in the mid-90s; I talked about replacing your neurons one at a time by silicon chips, and what happens to your consciousness. And I at least tried to argue there that the being you would get to at the other end, the whole silicon being, would, in principle, be conscious, and the consciousness could be preserved throughout. But now you're raising, not so much the issue of consciousness, but the issue of identity over time. So I think it's very plausible that in your case, where you end up with a silicon brain and a biological brain, they would both be conscious beings. The question is, which one would be the same person as the one present at the start? And it seems that there are three options. One is, I will be the silicon person at the end. After all, my stream of consciousness at the beginning is continuous with that stream of consciousness at the end, which is not the case for the biological being. The second possibility: I am the biological being, because they're made of the same stuff as the original. Or third, I am equally both of them. I am most inclined to go for the third view, that I'm equally both of them. It's a little like a fission case where you split my brain and transplant the two hemispheres into different bodies and say, "Which one of them now is me? Which one would I prefer to be tortured?" And I think in that case, most of us will say they're both me to exactly the same extent, and yeah, that's really weird. But whether you torture one or the other, it's not a big difference. I guess I'd say, given your scenario, I'd most like to treat that as a fission case. But we don't understand personal identity. I think to really get to the bottom of this, we need a theory of what makes a person the same person over time.

SPENCER: Yeah, I think what bothers me slightly about that perspective is saying that they're equally me. I definitely see the case that they're both me in an important sense. Whereas, in the fission case, if you just split your brain in half, left hemisphere, right hemisphere, put them in two different robot bodies. Okay, it feels like, well, left hemisphere, right hemisphere, about equally important, and so, okay, they're equally me. But in this case, they have very different relationships to me, or claims to be me. And so giving them equality as being me, it bothers me. Not to say that I know any better answer, I certainly don't.

DAVID: We all have this intuition that there are deep facts about being the same person over time, that one person is me and the other person is not me. It's as if there's a Cartesian ego that goes one way or the other way. But yeah, I've gradually evolved towards the view where I'm not even sure there are these deep facts over time. There are just beings at this time, there are beings at that time, and there are different relationships between those beings over time which we might come to care about, or might not. There might come to be a relationship of biological identity, and another relation of functional identity, and different people could choose to care about one more than the other. Sometimes I'm tempted by the Buddhist direction, which says, actually, there is no deep self that persists over time. You shouldn't just be looking for answers to those questions about the self, because that kind of 'selfie-ness' is ultimately an illusion. And at the end of the day, that might be the right answer.

SPENCER: Yeah, I've seen some people take it in that direction. They're just like, "Well, all these philosophy of mind experiments show that our self is just an incoherent concept. Maybe every moment, we're just a new being. And maybe our relationship to our future self is not different from our relationship to all other beings in existence," or something like this. But then it also feels like there's this incredibly strong feeling that, "Oh, wait, I am going to experience the next moment of my own mind, but not someone else's," and it's hard to square that with this idea that there is no self at all.

DAVID: Yeah, it's a very strong intuition that I'm going to be somewhere or another. But then the question is, do I actually have evidence for that intuition? Is denying it consistent with my evidence? In many cases, we've got intuitions that we end up not accepting, like our initial attitude towards colors: there are colored things out there in the external world, and things have these qualities of redness and blueness spread all over them. And then we come to think, okay, maybe it's not like that in the external world. There are just these things in our mind, and we're okay with that. I call this the fall from Eden in the book. It's like we first believed in Edenic colors, these primitive things out there in the world, and then, no, we just came to believe in physical things that reflect light into our minds. But yeah, it's possible that personal identity or the self could turn out the same way on our natural model of the world, the Garden of Eden. In the Garden of Eden, we had actual primitive selves that continue over time. But then somehow, we had a fall from Eden, and then we realized the world is not like this, and maybe we can just get used to that. Of course, some people want to push this all the way to consciousness itself: we have the intuition that we're conscious, when in fact, we're not. And that's the final step that I'm not willing to take here, because I think we have undeniable evidence that we are conscious. But people have given up on one thing after another. You give up on colors, give up on strong free will, maybe you give up on the self. One natural path is to give up on consciousness, too.

SPENCER: Yeah, I've seen more and more people in my social circles, over the last five years, start saying that they don't believe in consciousness anymore. And this view really confuses me, because it's hard for me to think of anything I have more evidence of than that I'm conscious, because it seems like something that I just experience at every moment, and I directly experience it. Whereas, almost everything else I know is indirect. It's an inference from, I have this visual pattern and I make an inference about there being these things out there in the world. Consciousness, I just directly experience. I directly experience redness and pain and happiness. I'm curious to hear your thoughts on, is there a surge in this view that consciousness is an illusion? And if so, why? Do you have sympathy for it?

DAVID: I actually do have some sympathy for it. At the same time, at one level, I think it is kind of unbelievable. As you say, consciousness seems to be the thing that we're absolutely certain of, that we have direct evidence of; how could one possibly deny it? On the other hand, it could be that you could actually give some kind of physical or algorithmic explanation of why we have these intuitions. I call this the meta problem of consciousness. This is the problem of explaining how and why it is that we actually have these intuitions, and it could be that these intuitions are just a bit of behavior. Maybe that can be explained algorithmically. And then ultimately, a good enough solution to the meta problem might explain why it is that we find illusionism unbelievable. Just say we came up with a great algorithmic explanation of why beings like us will be absolutely sure that they are conscious. Then the question is, should that maybe diminish our confidence that we are conscious and make us take more seriously the idea that it's an illusion? I'm not prepared to take that final step, but I do find it a fascinating view. If I were to be a reductionist — to take a broadly reductionist approach to consciousness — this is the approach I would take: find a solution to the meta problem, and then call that a solution to the hard problem, because it explains why we believe in this thing. As you say, though, ultimately, it's almost impossible to believe, because consciousness just seems to be this thing that we have direct evidence of. An illusionist is just gonna have to reject that. They're gonna say there's no such thing as primitive, direct evidence; it's just a misleading appearance. It turns out that, yeah, it's actually very hard to get illusionists to come right out and say they're denying that we're conscious at all. But that, I think, is the most consistent line for them to take here.


SPENCER: One thing that strikes me as very strange about consciousness is, if we assume that it's a real thing — we're actually having experiences, there's something that's different about being a human than being a rock because there's internal experiences in the human — if you were an alien from another dimension, and you read our physics textbooks, I think you would have no reason to think that our universe had consciousness in it based on all the equations in the physics textbooks. And yet, it seems to be something that is incredibly, incredibly important about our universe. And I find this very odd that, somehow, there seems to be this thing going on in the universe, that we seem to have no way to capture in our best attempts to describe the universe through physics. And I'm curious if you have thoughts about that tension there.

DAVID: Yeah, this is the underlying puzzle that got me into philosophy and worrying about consciousness in the first place. Why is there consciousness at all? Look at the world from the objective viewpoint, say the viewpoint of physics. There seems to be not much reason to postulate it. Maybe you could get some physical explanation of why we have these complex brains and why we behave in certain ways. Maybe even it would explain why it is that we say that we're conscious, why people go around making noises like, "I am conscious." But would it actually explain consciousness? Consciousness itself seems like a surprising feature, relative to all of that. From here, you can go in a number of different directions. You can say, maybe eventually, we'll have a physical explanation of it. It's hard to see how that goes. You can say consciousness is something primitive, over and above physics, maybe a form of dualism. You can say consciousness is actually part of what makes up physics. This is the panpsychist view; consciousness is everywhere, and it's ultimately what physics is about, it's about some primitive play of consciousness. Or you can say consciousness is an illusion. In fact, all there is, is physics. And there are some intuitions about consciousness, but consciousness itself is not real. That basically is a way of generating this dilemma at the heart of the problem of consciousness. And then I don't know what direction you end up being pushed by all the different considerations.

SPENCER: Well, one thing that really bothers me, especially about this topic is, if you take something like unifying gravity with quantum mechanics, although we have no idea how to do this right now — or maybe we have some inklings but it really seems like we don't have the right approach — it at least seems like the sort of thing that science knows how to do. Like, we've solved problems like this before. And with consciousness, it's not clear to me we even know how to solve a problem like this, or that the tools of science as we know them even have the capability to solve a problem like this. And I'm wondering, do you share that intuition that there's something different about that kind of problem that makes the scientific tools not necessarily applicable?

DAVID: This is pretty much what I was getting at all those years ago when I called this 'the hard problem of consciousness,' to contrast it with the easy problems. The idea was not that the easy problems of explaining behavior, or explaining language and memory and verbal report, are trivial. It's going to be decades, maybe centuries, to explain them properly. The idea was, we have a research program that we understand for explaining those things, for getting at the easy problems of sight, learning, and memory. Those are ultimately problems about how it is that the brain or the organism does something, how it plays some role. And to solve those problems, you specify a mechanism — maybe a neural mechanism or a computational mechanism — show how it plays that role, and then you've solved the problem. But for consciousness, it looks like it's just not that kind of problem. It's not a problem of explaining how it is that the brain produces some particular behavior. You can explain all those behaviors, all those objective functions, and we still have the hard problem arising, which is: why is all that accompanied by subjective experience? Why does it feel like something from the inside? So it's just a different kind of problem, and it looks like we need a new paradigm for explaining it. The paradigms I like best don't even try to reduce consciousness, but just try to connect it as well as possible to everything else we know, and then ultimately find the fundamental laws that integrate consciousness with the physical world. But there are other projects, like the illusionist project, here too. Even that says, "Let's just explain the things we say about consciousness." And that's, again, recognizing there's a different kind of research program here. And I think most people agree that the standard research program is limited when it comes to consciousness. The question is, where else do you go?

SPENCER: Are you talking about, for example, experiments that will look at when a human is able to report conscious experience? I know that there are cases where a person says they can't see something, but you know that they actually can because, if you throw a ball at them, they'll catch it. So they don't have a conscious experience of a ball, but they somehow are aware of it on some other level, things like that.

DAVID: That's one bit of work. The science of consciousness, the neuroscience and psychology of consciousness, has really exploded over the last 30 years. In fact, this June in New York, the Association for the Scientific Study of Consciousness is meeting at NYU. We'll have a whole lot on, for example, the neural correlates of consciousness, the distinction between conscious and unconscious processes, all the things people can do unconsciously. And it is the case that the science of consciousness tends to rely on verbal reports as its major guide to where consciousness is present. If someone says they're consciously experiencing a given stimulus, you accept that they are. If they say they're not, you accept that they're not, unless there's some reason to doubt them. So that will help lead us, I think, to a science of consciousness that correlates consciousness with underlying physical processes. And ultimately, we can try to systematize that, maybe even ultimately find some fundamental laws. There are theories out there now, integrated information theory, for example, that purport to give fundamental laws of consciousness. But we're still very early in that process.

SPENCER: Not to be a wet blanket, but I worry that even if we pile up all the scientific facts about consciousness, and when it appears and its correlates and so on, that somehow, it still won't be getting at the core thing.

DAVID: The route that I ended up going is saying that some things in the universe are fundamental. Space, time, mass, at least in our classical view, are fundamental. You don't really explain why they exist. You just have fundamental laws that govern them. No one explains why a fundamental principle — say, the fundamental law of gravity — is true on the Newtonian picture. Maybe a unified field theory will explain that in terms of something more basic, but you have to take some things as basic. So I'm inclined to say maybe consciousness is one of those things we have to take as basic. Just say we have a fundamental law, like the integrated information principle, that says when you have a certain information structure in a physical system, you have a corresponding state of consciousness. Maybe that will itself have to be taken as a fundamental law of nature. And that's frustrating. We hope to do better. You might have hoped we could get consciousness from the physical basis somehow for free, as we seem to get chemistry or biology for free. But I think one possible moral of this problem of consciousness is that we have to treat problems of consciousness as, in some ways, more analogous to problems in fundamental physics, where we're looking at fundamental properties and fundamental laws.

SPENCER: Another thing I wonder about in terms of the connection of consciousness and science, is the role of consciousness in evolution. You could imagine beings evolving and learning to survive and getting all these traits that help them survive, without ever being conscious. And it creates this weird question which is, did consciousness arise because it was useful in some way and it aided survival? Or did it arise as a coincidence or some spandrel, like it's not actually having a functional purpose? And if it's the latter, if it's actually not functional, it seems like a really, really strange coincidence to occur. Whereas, if it is functional, if it's serving a purpose, then I find that confusing in its own way, which is, well, what is the purpose? Why couldn't we do the same things without it?

DAVID: Yeah, it's a great question and nobody has the answer to this. There's any number of functions that intuitively you might think consciousness performs. It gives us information about the world. It guides our behavior, gives us control, allows us to integrate, allows us to plan, allows us to make decisions. The trouble is, almost all of those functions look like they could be performed without consciousness, both in principle, and often in practice. No one's ever come up with a function that looks like you would absolutely need consciousness to do it. One way of putting the question is saying, "Well, why couldn't evolution have done just as well by producing philosophical zombies, systems that are physically just like us with all the behaviors but no consciousness?" It looks like that's a coherent possibility. And if you think physics and neuroscience and so on are causally closed, it looks like zombies would have reproduced themselves just as well as humans do. So why is consciousness needed? It could be that non-physical consciousness plays some special causal role in the universe that couldn't be played without consciousness. Maybe consciousness collapsing the quantum wave function does something really special. Or it could be that, as you say, consciousness is a spandrel: what gets selected for is certain information structures in the brain, and those happen, by virtue of these fundamental laws, to give you consciousness. But it would be very surprising if something as important as consciousness were a spandrel. So I think this is just another one of those cases where we really need a good theory of consciousness to answer the questions. But right now, it's just a dilemma.

SPENCER: Well, these questions about consciousness might sound abstract and interesting but maybe not important, yet we might be becoming a civilization that actually needs to answer them. Think about AIs and large language models that, in so many ways, seem intelligent. We could start asking the question, "Well, could we accidentally make something that's conscious?" And if the role of consciousness is functional, if it actually helps us do things — like maybe consciousness is needed for certain types of planning, or certain types of problem solving or something like this — then as we try to build AIs that do all these things, maybe we'll accidentally stumble on creating conscious minds. And if we do that, maybe these minds can suffer. Maybe now there are moral questions that come into play. And we have to think about the ethical treatment of AIs, which sounds crazy and science fiction-y, but how do we know that we don't have to do that? What are your thoughts on consciousness and these AIs we're building today?

DAVID: I think questions like that are already important even before you get to AI systems, starting with human infants. What is an infant experiencing? Might they be suffering? Obviously, it's incredibly important to think about animal consciousness and animal suffering because, if fish are not conscious, then they can't consciously suffer, and the moral question is going to be much less pressing than if it turns out that fish are conscious and can suffer. So those questions are already incredibly important; it's incredibly important to sort out consciousness in all these creatures. But AI adds a whole new dimension to that, because for a long time, people thought, well, existing AI systems are not conscious; maybe we'll eventually have AI systems which are conscious. But now, with the rapid explosion in AI over the last ten years, it has suddenly become a question we can reasonably ask about existing systems: are they conscious? The Google engineer Blake Lemoine thought last year that he had pretty good reason to think one of their language models was conscious. Other people were doubtful. I think it's actually a complicated and interesting question. I recently gave a few talks and wrote an article on whether current language models are conscious, and whether their successors in the next ten years might be conscious. I ended up with a fairly low probability for their being conscious right now, a bit under 10%, which is still substantial, but a much larger probability, maybe something over 20%, for them or their successors being conscious within the next ten years. Because the various obstacles to consciousness in current AI systems, like maybe the lack of senses and the lack of a body, might well be overcome in the successors to current language models, with much more integration...say, with images, other sensory information, and control of at least a virtual body.
So I think consciousness in these AI systems is coming quite possibly quite soon. And yes, at that point, we really have to think about the moral question. Is this something we want to do? Is this the path we want to take or are we just going to be creating a whole new class of beings that we're exploiting and that are suffering? It will be a disaster if we create these systems just accidentally, by the by, without even realizing it, and possibly creating all this suffering. So I think we absolutely have to reflect on this. And this is a point where thinking philosophically about consciousness is going to be very, very relevant to these practical questions.

SPENCER: It's also disturbing to think that these AIs, if we ever make them conscious, could have huge amounts of perceptual experience. There could be millions or billions of these AIs, but they also might experience things at a much faster rate. So maybe in an hour, they could have a million years of experience, which is really crazy and disturbing to think about.

DAVID: Yeah, you might worry about the training process for a lot of these machine learning systems, which go through a lot of negative feedback. The case of reinforcement learners, I guess, is especially worrying. Does all that negative feedback correspond to suffering? During the training process, are these systems going through huge amounts of suffering at a very fast rate? Over days of training, is one of these systems undergoing some enormous, unprecedented amount of suffering? It's a science fiction-y kind of question, but I think we're getting to the point where we need to actually ask those questions. And if it turns out that we have serious reason to think that, say, certain training methods are possible sources of suffering, then we need to think very carefully about using those training methods.

SPENCER: The last question I want to touch on before we wrap up is the PhilPapers survey. Can you tell us just a little about what it is and why you ran it?

DAVID: Oh, yeah. People make claims about what most philosophers are: materialists or theists or utilitarians. We never really had data. A bit over ten years ago, my former student, David Bourget, and I set up this big database for philosophy called PhilPapers, initially a bibliographic database of most works in philosophy, and pretty much all philosophers these days — at least all Anglophone analytic philosophers — use this system. At a certain point, we realized we might actually be in a position to run some surveys of philosophical views, to see what philosophers actually believed. So back in 2009, we ran the first PhilPapers survey with 30 questions, questions like mind: materialism or non-materialism; God: theism or atheism; normative ethics: consequentialism, deontology, or virtue ethics. And we got interesting results. For example, 56% of philosophers were materialists about the mind, 28% were non-materialists. 73% were atheists, 13% were theists, and so on. Then we did it again in 2020, fairly recently. We expanded the list of questions from 30 to 100 to get more data, expanded the population we were surveying, and also did longitudinal studies of how these things changed in philosophers over time. And yeah, the results are really interesting. For example, virtue ethics has become more popular over time, it seems, which is interesting. Certain views, like non-classical logic, were much more popular in 2020 than in 2009. Anyway, it's just nice to have data about these things. And one interesting spin-off has been in articles: where philosophers used to say, "Yeah, well, the orthodox view is blah, blah, blah," now they can actually cite empirical survey results about what it is that most philosophers in these populations believe.

SPENCER: It seems to be a very valuable service to have this out there. But as a layperson, what really struck me going through it is how much disagreement there is among philosophers. I actually sat down with a philosopher friend of mine, and we kind of walked through the whole survey so that, some of the items I didn't really understand, she could explain to me. What am I to make of this huge divergence? Is it because it's somehow focusing just on the questions that philosophers disagree on? Or is there something more fundamental that makes philosophers disagree on these topics?

DAVID: Yeah, it's kind of a notorious question why there's so much disagreement in philosophy. It does seem like at least one big part of the answer is selection effects. Philosophy is basically the too-hard basket of questions where we haven't found ways of compelling agreement. Most of academia got started as philosophy. At one point, the study of space and time was philosophy. At one point, the study of the mind was all philosophy. But Newton found methods for somehow making formal and experimental progress on some of these questions, and getting some kind of agreement. And at that point, we spun off physics and called that a separate field of its own. Likewise, at one point, people found ways of making progress on some questions about the mind. We spun that off and called it psychology. Mind you, none of these methods ever solves all the problems in a field. So what's left in philosophy is the questions that we haven't found a way to agree on yet. In a way, you'd expect there to be a lot of disagreement in philosophy. And the survey, of course, focuses by its nature on some very controversial questions, but it's still interesting why it is that these particular questions are still the subject of so much disagreement. Why haven't we made more progress on these questions to the point where we agree? I think that's just a really interesting question in its own right. I wrote this article called, "Why isn't there more progress in philosophy?" to address this, and yeah, it's an open question. But I think the selection effect is at least a big part of what's going on.

SPENCER: If you go to 20 doctors and they all agree with each other, you don't know that they're right. But if you go to 20 doctors and they all disagree with each other, you know that most of them are wrong. One reason, I think, that looking at expert agreement or disagreement is interesting is that it gives you some sense of whether you can just trust what that group says. And so I guess the widespread disagreement in philosophy makes me think that perhaps we shouldn't trust philosophers for answers to questions, but they are very good at pointing out flaws in others' answers, if that makes sense. I'm curious what you think about that.

DAVID: I think that's reasonable. The existence of all this disagreement shows that, even for a very good philosopher, their first-order views needn't be that strong a guide to reality. But they might have some understanding of the underlying issues, for example, the best framing of the problem, the best options, the best version of each view, which views have fatal flaws, and so on. I think philosophers are very good at those kinds of higher-level issues. Some people think, well, at least we have some degree of understanding of the problems, even if we don't have great solutions. I think that's at least part of what's going on. And you're right also, philosophers are very good critics, so they can always find problems with a view. And that's one form of progress, if only negative progress.

SPENCER: Would you say that there's more consensus on views that don't work? That there are certain views that pretty much all philosophers would be like, "Yeah, that's fatally flawed"?

DAVID: Yeah, I think it's much easier to find the fatal flaw for a view. Although then what happens is people come up with, "This view doesn't work," but they say, "Okay, well, this nearby view still works." And then they know what form of the view you need. So some form of dualism doesn't work but, okay, here's this other form that does work. So at least we rule out certain areas of logical space gradually, but there's still plenty of space left open.

SPENCER: Do you think that the methods philosophers are using are up to the job? Or do you think we need some kind of new methods to make continued progress on these difficult problems that seem to have been around for hundreds or thousands of years?

DAVID: Well, I think there's always the possibility of new methods. And one way of seeing the history, say, of physics and psychology and economics and linguistics, and so on, is that all of these spun off out of philosophy through the use of new methods that were better at compelling agreement. I'm always open to new methods in philosophy and to spinning off new areas. That said, there are quite often, from within philosophy, proposals of radical new methods: 'we should do all philosophy experimentally, or empirically, or through the analysis of ordinary language.' And typically, those methods are useful for some things but are often inconclusive. I think the proof is in the pudding. The logical positivists said, "Let's just reduce all philosophy to either empirical questions or questions about the usage of words." And that was an ambitious project and, at the end of the day, most philosophers weren't convinced that this method works. Likewise, today, some people say, "Let's reduce all philosophical problems to Bayesian problems." We'll just bring it all down to priors and evidence, but the trouble is that that pushes so many of the problems into the priors. So it's hard to find the one universal acid that works for all philosophical questions, but I think we can still locally try to work out better methods that compel better agreement, at least on some philosophical questions. And that's how philosophy in practice gradually progresses.
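[Editor's note: the "pushes the problems into the priors" point can be made concrete with a small worked example, not from the episode. The sketch below applies Bayes' rule in odds form to two hypothetical reasoners who see the same evidence but start from different priors; the specific numbers are illustrative assumptions, not anything Chalmers cites.]

```python
# Illustrative sketch (hypothetical numbers): the same evidence can leave
# two Bayesian reasoners far apart when their priors differ, so the
# philosophical disagreement is "pushed into the priors".

def posterior(prior, likelihood_ratio):
    """Posterior probability of hypothesis H after evidence E,
    where likelihood_ratio = P(E | H) / P(E | not-H)."""
    odds = prior / (1 - prior)      # convert probability to odds
    odds *= likelihood_ratio        # Bayes' rule in odds form
    return odds / (1 + odds)        # convert back to a probability

# Both reasoners see the same evidence (a 3:1 likelihood ratio for H),
# but start from very different priors on H.
skeptic = posterior(prior=0.05, likelihood_ratio=3.0)
believer = posterior(prior=0.80, likelihood_ratio=3.0)

print(f"skeptic:  {skeptic:.2f}")   # roughly 0.14
print(f"believer: {believer:.2f}")  # roughly 0.92
```

Even after a fully shared, fully rational update, the two end up on opposite sides of 50%, which is one way to see why "just apply Bayes" doesn't by itself settle a philosophical dispute.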

SPENCER: David, thank you so much for coming out. This was a really fun conversation.

DAVID: Well, thanks, Spencer. It's been great talking with you.




