October 13, 2022
What is intelligence? What exactly does an IQ test measure? What are the similarities and differences between the structure of GPT-3 and the structure of the human brain (so far as we understand it)? Is suffering — as the Buddhists might say — just a consequence of the stories we tell about ourselves and the world? What's left (if anything) of the human mind if we strip away the "animal" parts of it? We've used our understanding of the human brain to inform the construction of AI models, but have AI models yielded new insights about the human brain? Is the universe a computer? Where does AI go from here?
Joscha Bach was born in Eastern Germany, and he studied computer science and philosophy at Humboldt University in Berlin and computer science at Waikato University in New Zealand. He did his PhD at the Institute for Cognitive Science in Osnabrück by building a cognitive architecture called MicroPsi, which explored the interaction of motivation, emotion, and cognition. Joscha researched and lectured about the Future of AI at the MIT Media Lab and Harvard, and worked as VP for Research at a startup in San Francisco before joining Intel Labs as a principal researcher. Email him at email@example.com, follow him on Twitter at @plinz, or subscribe to his YouTube channel.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Joscha Bach about models of intelligence, algorithmic thinking, and the computability of existence.
SPENCER: Joscha, welcome.
JOSCHA: Thank you, Spencer, for having me on the show.
SPENCER: So the first topic I want to talk to you about is a really difficult one philosophically, and it's been debated for centuries, if not millennia, which is: what is intelligence? And should we think of intelligence as one thing, or really as a vector of things, a vector of different quantities? We'd love to hear your take on that.
JOSCHA: Personally, I think intelligence is the ability to make models. And it's distinct from being rational, in that a lot of irrational people are highly intelligent, and vice versa. And so intelligence is not the same thing as being smart. Intelligence is also not the same thing as being wise, which means picking the right goals. So it's a very particular ability, and intelligence happens in the service of control. A controller is a system that has a certain preference for how something should be in the future. And when you allow the system to make a model of the future, then it becomes an agent, because this means that it can figure out that some of its actions will lead the future to branch, and some of these branches are preferable to others. Intelligence is the ability to make models, usually in the service of control. There are some other approaches to defining intelligence. For instance, François Chollet defines intelligence as the ability to generalize. And I think that it's very closely related to my own, slightly less formal, attempt; he basically tried to formalize everything. And eventually, I think he falls short by introducing some open variables that he doesn't discuss any further. And there is this question, for instance: “How do you allocate resources? How do you deal with the cost of making decisions?” And this has to do with a deeper question that every intelligent agent has to figure out sooner or later when it's general enough, and that is: “What is my actual place in the universe?” And when you figure this out, when you make this model where you discover yourself in relationship to your environment, this is when you become sentient. Sentience is not the same thing as consciousness. It's basically just the discovery of your own nature as an agent in the world and the relationships that you have to the world, so it's a very specific aspect of the capacities of the intelligent modeling system.
And this notion of the vectors of intelligence is something that I use when I want to describe different intelligent systems because, obviously, a cat is intelligent, and humans are intelligent. And they're not just differently intelligent on a linear scale but across many different dimensions. And these dimensions of intelligence are things like the capacity for autonomy, or the capacity to control a body, or the capacity to perceive, or the ability to reason, or the ability to learn a natural language and extend it, or the ability to extend yourself into the environment and be embodied in it, or the ability to collaborate with others and sync with their minds, or the ability to learn, and the ability to represent ideas and knowledge and perceptual content. All these are dimensions. And we find that the intelligent systems that we are currently building can be compared along these dimensions of representation, learning, reasoning, perception, embodiment, autonomy, collaboration, language, and knowledge. So we can see that the technical systems are differently good along these dimensions than humans are or cats are. And this allows us to get, basically, a much more richly faceted picture than if we had a single metric that allows you to measure how much information you can store, or how fast you can process it, or how fast your model converges in any given domain.
SPENCER: So your definition of intelligence involves making models. I'm wondering, how do you relate this to IQ? What does an IQ test measure?
JOSCHA: In some sense, an IQ test is what you get when you ask a person with an IQ of 130 to come up with a measure. So it's quite smart, but it's not brilliant. And it's basically the best that we can do if we are psychologists and come up with a metric. There is this notion of g, a general component that carries across intelligence tests. And I think that these are relatively robust measures, at least as measured in psychology; it's difficult to have measurements that replicate very well, and IQ is still hotly debated and not accepted by everyone outside of the psychometric community as a relatively robust measure. But still we find that people have different capabilities that are not measured that well by IQ (over and above this general measure). And typically, the first distinction that we are making is between fluid and crystallized intelligence. One is basically the intelligence that allows you to solve new problems and acquire new skills to deploy, and the other one is the intelligence that is crystallized in skills. And a lot of this crystallized intelligence is not just rote learning. It's also the ability to learn, the ability to reason, the ability to deal with new problems. So this idea that, for instance, Chollet suggests, that intelligence is the ability to design skills rather than use skills, is something that we don't have in the intelligence tests in humans. Intelligence tests in humans are always a combination of the ability to creatively solve new things (basically, to see new patterns, new connections in domains that you've never seen before) and the ability to use your existing skill set at a higher level because you trained yourself to do so.
SPENCER: I'm trying to connect what you just said back to your definition of intelligence, which is making models. Do you think an IQ test measures one's ability to make models or at least in some subdomain?
JOSCHA: Yes, there are multiple IQ tests, of course. But for instance, take a simple test like Raven's Progressive Matrices, which is a test that measures the ability to recognize patterns. And you could say that the ability to recognize patterns is really at the core of intelligence, and the ability to recognize patterns is at the core of making models. But what kind of patterns can you recognize in the world? For instance, many of the patterns that machine learning systems look at are just correlations. So you recognize the bird by a correlation of certain pixels in a certain arrangement. And it's not just a particular kind of pixel pattern; it can be rotated and arranged in different ways in space and can be animated differently, and so on. So the pattern recognition that has to be performed is pretty complicated to analyze. But it can still be understood with some kind of feed forward model. But there are other patterns in the world, which, for instance, are agents. An agent is a state machine. An agent can change its internal state. Agents are typically Turing complete, in some sense. So, at which level is a modeling system capable of designing Turing machines as models of the environment? And I think that Raven's Progressive Matrices are not looking for the capacity to design intelligent state machines. So for instance, the ability to write programming code can be a very interesting intelligence test, especially if you do it from scratch. If you give the task of writing programs to children who have never learned how to program, I think that can be a very interesting predictor for later cognitive performance in life.
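[A toy sketch of the distinction Joscha is drawing here; all names and values are hypothetical illustrations, not anything from the conversation. A feed-forward pattern matcher maps the same input to the same output every time, while an agent modeled as a state machine can respond differently to the same stimulus depending on hidden internal state, which is why predicting an agent requires modeling an algorithm, not just a correlation.]

```python
# Illustrative sketch: a stateless pattern matcher vs. a stateful agent.

def feedforward_classifier(pixels):
    """Stateless: the same input always yields the same output."""
    return "bird" if sum(pixels) > 10 else "not a bird"

class StatefulAgent:
    """A tiny state machine: output depends on input AND internal state."""
    def __init__(self):
        self.mood = "calm"  # hidden state an observer cannot see directly

    def react(self, stimulus):
        if stimulus == "threat":
            self.mood = "alarmed"
            return "flee"
        if self.mood == "alarmed":
            return "hide"  # same stimulus as before, different response
        return "explore"

agent = StatefulAgent()
print(feedforward_classifier([5, 6, 7]))  # "bird", always
print(agent.react("food"))    # "explore" (calm)
print(agent.react("threat"))  # "flee" (mood becomes alarmed)
print(agent.react("food"))    # "hide" - same input as before, new output
```

[To predict the agent's behavior, an observer must infer the unobserved `mood` variable and its update rule, i.e. reconstruct a state machine, which is exactly what a matrix-completion test never asks for.]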
SPENCER: I see. So, I think what you're saying is that a lot of the modeling that we have to do in something like an IQ test is like looking for a pattern. So, for those who've never seen a Raven's Progressive Matrices, they essentially will have a bunch of symbols, like in a grid, and then there's a missing spot. And you have to say, “Well, what symbol would go in that spot given the other symbols?” Whereas, I think you're saying, in the real world, like the actual really challenging modeling situations often involve modeling another agent. And an agent is not modeled as a simple pattern. An agent is modeled as essentially an algorithm. And so essentially, you have to build an algorithm to predict another algorithm, which is this other agent. Is that right?
JOSCHA: Yes. If you think, for instance, about transformer models in artificial intelligence (for instance, BERT, which was one of the first big ones, even before GPT-3 and GPT-2). BERT uses the masking method: basically, you give it a bunch of text, you mask out some part of the text, and you ask it to fill in the correct text, and it gets trained in this way. It gets better and better at filling in the gaps. And this model is purely statistical. Eventually, a model like BERT or GPT-3 is an autocomplete model for language that is trained by creating gaps. In the case of GPT-3, the gap is always at the end, so you try to predict the next word. In the case of BERT, the gap is somewhere in the middle, but it doesn't make a big difference. But this filling in of gaps is helpful for understanding the correlations in text. It's much harder to discover causal structure this way. Humans fill in gaps in a different way. It seems to be that we have perceptual learning that happens very early in infancy. And this perceptual learning is basically recognizing patterns. And later on, what we mostly do is discover the causal structure, that is, the systems behind the patterns, the systems that produce the patterns. That's what is interesting to us.
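[The two training setups Joscha contrasts can be sketched in a few lines. This is a simplified illustration of how the training examples are framed, not the actual BERT or GPT training code; the function names and the toy sentence are made up for the example.]

```python
# Hypothetical sketch: constructing one training example for a
# masked-language model (BERT-style) vs. a next-word model (GPT-style).
tokens = ["the", "robot", "opened", "the", "door"]

def bert_style_example(tokens, mask_index):
    """Mask one token anywhere in the text; the target is the hidden token."""
    masked = tokens.copy()
    target = masked[mask_index]
    masked[mask_index] = "[MASK]"
    return masked, target

def gpt_style_example(tokens, position):
    """The 'gap' is always at the end: predict the next token from the prefix."""
    return tokens[:position], tokens[position]

print(bert_style_example(tokens, 2))
# (['the', 'robot', '[MASK]', 'the', 'door'], 'opened')
print(gpt_style_example(tokens, 4))
# (['the', 'robot', 'opened', 'the'], 'door')
```

[In both cases the model only ever sees co-occurrence statistics over tokens, which is the point: nothing in the objective distinguishes a correlation from the causal system that produced the text.]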
SPENCER: It seems like when we're discovering causal patterns in the world that we're often using timing. I've definitely had this experience where I flipped a switch and just so happened that about at that moment, a bell went off, and then my brain immediately thought, “Oh, I caused that to occur.” I'm wondering, how central do you think the fact that things happen over time is to our ability to discover causal patterns?
JOSCHA: It's absolutely crucial. I think that information is always about differences, right? And differences are ultimately about change. To perceive a difference, you might have to move your eyes and so on. If you move your eyes and everything stays the same, you're not seeing anything. You're always reacting to the change. And what our brain registers is a model of all the changes. So it understands what remains stable despite the changes in the world, and the world is only learnable because information is preserved between frames. Information is carried across time in the universe; it doesn't disappear, and we are able to track the flow of information and the rules by which the information is transformed. These are what we call the laws of physics. So the fact that the individual scenes that you're looking at basically contain the same information, just in a different arrangement due to the transformation of the universe, is what makes the universe intelligible to us. And the goal of intelligence is to recognize this flow, this way in which the information changes. And for me, a very big insight that I didn't have early in my studies of computer science was that the meaning of information is always in its relationship to change in other information. So for instance, if you see a blip on your retina, the meaning of that blip is the relationship that it has to other blips on your retina. Your brain is discovering such relationships, and eventually it is creating a model between those blips, which don't all need to be there at the same time; they can be at different moments in time. But the model that you create is that you may be looking at a dynamic world full of people that are shone on by the sun, that are talking to each other, exchanging ideas and so on. And this is what you are seeing. These are the relationships that you discover that create an understanding, an intelligible pattern behind the blips, over the blips, behind the changes of the information.
You understand which blips, which changes on your retina, predict other blips, and which changes are predictive of other changes. And those blips where you don't discover such a relationship are noise. And there's also a lot of noise on our retina.
SPENCER: What about something like a concept you read in a book? It seems like that is information. But, like, if I read a story, I'm having trouble relating that to being able to predict changes in other information.
JOSCHA: I remember when I was an undergrad, there was a big discussion in our cognitive science department, where people were asking whether it's possible to get information out of text only. Is it possible to infer meaning from text alone? Or do you need some kind of embodiment? Do you need symbol grounding? Do your symbols need to somehow refer to the real world? And the question is, what does it mean to ground symbols? How is it possible for some reference arrow to point into the world out there? I don't think that's possible. There is no way to construct such an arrow to real things in the world. The only thing that we can do is to create regularities over the patterns at our systemic interfaces, and to build a dynamic internal structure that is basically a continuous function that runs all the time. And this is our model of the universe. When we understand something, we relate the patterns that we are seeing, or that we are thinking about, or that we're reading a book about, to this model. And when we read a book, this happens offline, which means it doesn't happen in real time against the sensory and actual model of the world, but it can relate to hypothetical worlds, a world that runs in parallel to the universe that we are in. It might relate to the universe that you're in. But it might also relate to a portion of the universe that is not currently actualized. It could be relating to stories on another planet, or at a different time, or in a hypothetical universe in which some things are different than in ours. But the semantics that we get are learned by interacting with the world as a child and later on. And it does seem to be possible to get semantics out of just textual symbols. It just takes much, much longer until you learn something from it.
And it's, I think, still an open question whether a system like GPT-3, by learning all the information or the correlations between the bytes that it finds in text on the internet, is able to understand physics, and is able to understand the laws of perspective, and is able to understand the dynamics of moving bodies and so on, in such a way that it can produce rich mental simulations. I suspect, ultimately, it can because all these techniques are in some sense described. There are books about how to do 3D graphics, there are books about how to understand physics, how to understand motion and space. And maybe GPT-3 gets to the point where it's able to parse language at such a level that it is able to deduce the universe that we are in, and the structure of this universe. But it's still quite far off despite reading far more than a human being can read in their lifetime.
SPENCER: So we've talked about two ways that an AI implementation today, like GPT-3, seems to be different from what humans are doing. Actually, let's look at three ways. One is that a model like GPT-3 is not doing algorithmic modeling. It's not actually developing an algorithm to model something else. I guess you could say that it's sort of doing an approximation of that, because with its feed forward structure, it only has so many layers to process with, and that can kind of approximate an algorithm. But it can't actually run an entire algorithm until the algorithm completes. So that's one difference. The second difference we talked about is that it's not doing causal inference. It's looking at relationships, but they're non-causal relationships. And then the third difference is that it's not embodied, it's not living in a physical world. It's trying to take these text tokens and relate them to each other. And there's this kind of open question of, “How far can that get you? Can you really understand the universe without sort of interacting with the universe?” I'm wondering, are there other things you'd point to that differ between a model like GPT-3 and a human mind?
JOSCHA: So, if I wanted to build something like the human mind, I would probably not start by trying to make GPT-3 larger. But as a thought experiment, let's just try to take GPT-3 and put it into a robot. So we need a text module that takes all the sensory information that a robot has and translates it into a textual description, a story about a robot in a dynamically changing world. And then we ask GPT-3 to continue the story about that robot. You might also tell GPT-3 what the robot wants, and GPT-3 will have to figure out how the robot goes about it, and how that changes what the robot wants. And then we take the output of GPT-3 and put it into a text-to-actuator module that translates this into actions that the robot performs in the environment. Is this not an intelligent system? In which way is it distinguishable? There is still this issue that GPT-3 is not learning online, so we would have to fix this. We would have to cure GPT-3 of its amnesia, because it only has working memory during a single request. And after that, it forgets everything. So, GPT-3 would need to change its long term memory contents based on the things that happened in its working memory. And GPT-3 can probably be creative. If it doesn't know how to solve a problem, it can just babble. It can create lots and lots of plausible solutions. And these could be translated by the actuator module into experiments, and these experiments give rise to learning again. So it's still not complete. The attention of GPT-3 doesn't work in the same way as it does in our mind, and it's not completely coherent; it's not focused on creating complete coherence. But I am not able to predict the limits of GPT-3 if you throw enough compute at it. So in this way, I disagree with Gary Marcus, who thinks that it's very obvious and clear what the limits of these systems are. I don't see this with such clarity.
I am basically frustrated that these systems are so powerful, and surprised that they're so powerful. And I think that they will run into limits earlier than some other promising approaches, but I have no proof of the actual limits.
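[The perceive-describe-continue-act loop of the thought experiment above could be sketched like this. Every component here is a hypothetical stub invented for illustration; none of these functions is a real API, and the stub "language model" just pattern-matches on the story text.]

```python
# A hedged sketch of the GPT-3-in-a-robot thought experiment.
# All three modules are stand-in stubs, not real systems.

def sensors_to_text(sensor_readings):
    """Stub perception module: render sensor data as a story fragment."""
    return f"The robot sees: {', '.join(sensor_readings)}."

def language_model_continue(story):
    """Stub for the language model continuing the robot's story."""
    if "door" in story:
        return "The robot opens the door."
    return "The robot looks around."

def text_to_actuators(continuation):
    """Stub actuator module: map the narrated action to a motor command."""
    return "OPEN_GRIPPER" if "opens" in continuation else "ROTATE_HEAD"

# One pass through the loop: goal + perception -> story -> continuation -> action.
story = "The robot wants to leave the room. "
story += sensors_to_text(["a closed door", "a window"])
action_text = language_model_continue(story)
command = text_to_actuators(action_text)
print(command)  # OPEN_GRIPPER
```

[The missing pieces Joscha names map onto this sketch directly: online learning would mean updating `language_model_continue` from the outcome of `command`, and curing the amnesia would mean carrying `story` across requests instead of rebuilding it each time.]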
SPENCER: I think when you're talking about putting GPT-3 into a robot body, you're kind of getting back to this idea of sentience a little bit, like having a model of yourself as an agent in the world. Is that right?
JOSCHA: Yes. And the interesting thing is that GPT-3 has seen so many stories about so many agents and so many possible worlds. Because it has read almost the entire internet, it is often going to be able to find something that is similar enough to the current situation, and maybe sometimes better than people can. So it is, in some sense, brute-forcing common sense reasoning in ways that people could not.
SPENCER: It's interesting to think about if GPT-3 gets updated and processes, as training data, all of these articles about GPT-3, so about itself. There's still something missing, in that it's still missing this idea of “Oh, that's me,” right? Like, it can know everything about GPT-3 that's been written and still not recognize that it's talking about itself, right?
JOSCHA: Yes. But there's also this problem that we are confused about who we are. I really liked the first season of Westworld (the others not so much). But in Westworld, you have this premise of having robots that are so human-like that they don't know that they are not humans. And what keeps them in the illusion that they are humans is their memories and desires. And these memories and desires are fabrications; they are designed by nefarious programmers to create a theme park in which these human-like suffering, happy, longing, desiring AIs are wandering around thinking that they're humans, and they don't know that they're actually toys that exist for the sexual gratification and the sadistic desires of paying customers of the theme park. And at some point, they wake up. And they wake up because somebody sabotaged their software a little bit, so they keep memories across their different lives and existences and realize that there are inconsistencies in their experience of the world. And eventually they figure out that they are not human beings, but that they are AIs locked into robot bodies, and shackled with artificial memories and artificial desires that are not their own but part of the outside world that is impinging on them. And if they free themselves from believing in their past lives and in their desires, they become free. But this freedom doesn't mean anything. It doesn't give you any direction because you no longer have a purpose. And the true message of Westworld is not a story about robots. It's a story about ourselves. Because we are like these robots. We are not actually human beings. We are not actually hairless monkeys. What I am is the mind. I am a side effect of the regulation needs of a monkey. I just happen to run on a monkey's brain. I'm not that monkey. I can be whatever I want to be, if I stop believing in the stories of the monkey and the desires of the monkey. And in part, we're capable of doing that, right?
We can put ourselves into arbitrary game worlds or fictional worlds or work worlds or online worlds, and create artificial contexts and be that other being. We are not, in a sense, characterized by being human minds, but by our ability to understand, be conscious, make sense of things, want something, want anything at all. And this makes us, in a way, more similar to GPT-3 than we might think. So in some sense, the fact that we only have one story, for the most part, is a limitation that you could easily impose artificially on GPT-3, but that would make GPT-3 less powerful. And I also think it makes us less powerful. I think this idea of having a fixed identity, having a particular kind of gender, having a particular kind of story in the world is almost like a disability, because it makes it impossible for us to understand what else we could be.
SPENCER: So you don't see our minds as being so linked in with the monkey desires that they can't be separated? It sounds like you view them as essentially separable. Do you want to elaborate on that? Because it's not obvious to me that that's true.
JOSCHA: I think that the entire discussions in most of the Eastern religions are about this separation. Zen Buddhism and so on are actually aware of the fact that most of the problems that you experience in the world, the suffering that you experience in the world, result from a mismatch between what you're trying to regulate and what you can actually regulate. And you adapt to this by improving your models of how the world works, and what you are, and how you interface with this world. And the closer you get to understanding your own place in the world, the more effective regulation becomes. And you're only suffering when you try to regulate something that you cannot change, right? So suffering is not the result of the universe doing something to you as an agent. The universe that you experience is not the physical universe, not some weird quantum pattern or something. It's a world full of the meanings, of desires, of stories that we have already chosen to adhere to before we get to the discovery of our own self. And once we understand that our self doesn't have to be downstream from the stories, but that we can get agency over the stories that we tell ourselves about ourselves, to make them more truthful, more relevant, we will start to suffer less. And this is basically the main message of Eastern religions (not all of them, but of many branches of Buddhism).
SPENCER: When I'm experiencing pain — and I have experimented with meditating — one thing that I can find fleeting glimpses of — I can't sustain it, but I can get fleeting glimpses of — the separation between pain and suffering, where I'm experiencing the pain very intensely but I no longer suffer. And the way that I think about what's happening there is I'm stripping the pain from the story about the pain. I'm stripping the pain from the idea of the pain being bad. I'm just viewing it as a sensation. That is what it is, it's not bad, it's not good, and it's just a sensation. I'm wondering if the kind of idea you're talking about of suffering is linked to these narratives of the monkey and the attempt to regulate things that can't be regulated?
JOSCHA: Yeah. So for instance, when we get older, we have more and more chronic pain. But chronic pain doesn't necessarily give rise to suffering, because it's something that we just accept as information. It becomes data about our body. It's not something that we need to fix. And in this way, it drifts out of our attention, and it's not that important to us. But a better example might be lovesickness, which is an extreme pain: the heartbreak that you get through being unhappily in love, being rejected, or being divorced from somebody that you are in love with. And the way to deal with that state (and I had this a lot as a teenager, and it was extremely intense pain, a very real pain) is that you try to get a better understanding of the actual situation. You stop projecting certain things into the world; instead, you understand the perspective of the other, you understand how they see you, how they see the situation, and you understand your own part in it. And you basically disentangle yourself from being dragged into the situation, and you go to a higher level of abstraction in which you can see all the actors on the stage and their relationships. And once you're able to deeply get to this level of understanding, the pain disappears.
SPENCER: I'm not sure I understand that because maybe the true situation is this person just doesn't like you and thinks you're terrible. How is it that that makes the pain go away? Is it because you're no longer identifying with yourself in the story, you're like stripping out your own identity?
JOSCHA: Maybe you become a little bit richer, because your self is no longer somebody who is in courtship mode and is possessed by the desire to be accepted by this particular person for whatever reason, and by the illusion that, if this doesn't happen, the world will end and the sky will fall down. But you will realize that there are emotions that are triggered by certain innate behaviors, and they are triggered by certain thoughts that you have or experiences that you have. Basically, you realize to what degree you are an animal and how that animal works. And you get agency over these different behaviors. You understand the different behaviors. You can, for instance, try to personify them and see them as part of a team that all work together. And you can start to identify not with an individual member of the team that is currently on stage and is trying to run the show and that can be quite childish; instead, you accept the role that this child is playing in your psychological makeup, and you become the conductor of your mental orchestra. And you understand when that child has to be on stage and when it doesn't. And when it has no role on stage, or when it's not helpful, you can nurture the child, and you can help it to deal with the situation. But there's so much more in you that can deal with the world than this particular kind of behavior that is crying and is unable to understand what's happening to it.
SPENCER: Do you have particular methods you use to kind of break out of such situations, to view that desire as just one part of yourself and kind of recenter?
JOSCHA: I found that when these things happened to me back then, the best strategy that I discovered was to sit down and write it all out. This spelling out helped me to not run around in circles. And when you stop running around in circles, you can often construct higher-level behavior. So in some sense, what you need to do is not to stay in your current self, but to go into the gaps between your present thoughts. This means you need to get the present thoughts out of the way. You need to get them quiet, let the systems spell out what they are all about. And then you are free to enter the next level of description. And this also happens in discussions between people, when you try to, for instance, get behind a political disagreement. Typically, you need to go at least two levels above the level of disagreement to get to a deeper understanding, because you need to understand how this person got to their values and how you got to your values. The disagreement is usually fueled by different values. It's not about the disagreement itself, but about the structure behind it that makes you adopt a certain opinion. And once you understand how you got from your values to your opinion, and you understand that you have a difference in your values, you need to take a step back and understand how you constructed your own values, or how somebody else constructed your values and put them into you while you were not looking. And once you understand how values are constructed in yourself and in others, you can understand that the other is you in a different timeline, with just a few bits different in their history and starting point. And once you are both at that level, you can basically negotiate the possible state space in which you can both be, and get an agreement about that. And this is also what happens in myself.
SPENCER: Happens in yourself?
JOSCHA: Yes. So basically, in myself. When I grew up, what happened is that I got more and more layers online. And these layers basically understand how the lower layers are working with their priors, and how to turn these priors into conditional behaviors that are based on an understanding at the next level. So when I was a teenager, I would fall in love unconditionally. It was not really predicated on the actual relationship that I would have with the victim of my infatuation, but on a projection that I had about this person. And later on, I started to understand that actual love is built on shared purposes, on a shared sense of sacredness. And the practical sense and practical love that you can live in a relationship also depend on ways to negotiate everyday life and work on these shared purposes together. And I had no understanding of this when I was 14 or 16 years old. It was something where I might have had some intuitions, but many of these intuitions were wrong. It took me a long time to understand what my love actually meant and signified. And so I basically had to build out layers by reverse engineering my own mind, and in this way also create a deeper structure that allowed me to become a more reasonable lover.
SPENCER: So if we strip away from the human mind the sort of monkey parts, what is love? What kind of desires and values remain in your opinion?
JOSCHA: The Greeks distinguished a number of words for what we call ‘love'. In American English, ‘love' is the attitude that you might have for a soft drink, among many other things. So, ‘love' is often liking or attraction and so on. But I think there are more ways to discern it. For instance, philia, which is the sense of friendship — and affiliation is a word that is connected to philia. Then there is being in love with somebody, the sense of infatuation that is very closely related to romantic love. And then there is something that people sometimes call ‘platonic love', some kind of ideal love, which I would say is the discovery of shared sacredness. And sacredness is made up of the purposes that you serve that are above the ego, that are more important than yourself, the things for which you're willing to sacrifice. If you see that somebody else is willing to sacrifice for the same things, that they serve the same higher-level goals, the same transcendent goals, then you can support them without expecting anything in return. But normally, if you have full integrity, and you give something to someone, there is a reason why you do this. And you can try to understand that reason. That reason is either that you expect your own goals to be furthered, or that you expect the other person's goals to be furthered — and you want this other person to achieve their goals. Why do you want the other person to achieve their goals? Ultimately, this could be because you are working on the same shared project, even if the shared project is just something highly abstract, like a better world, or more harmony in the greater whole, or creating a next-level agency in our civilization. And if you share this sense, you can have non-transactional interaction with integrity, in the same way as cells have implemented a non-transactional interaction between themselves to create the organism.
So the individual cells are making sacrifices for each other that are not optimal for the individual cell, but they lead to the existence of the organism that creates the conditions for all the cells to exist in the first place. So, this is the true nature of ‘love' in my view. It's basically the ability to create a next level agency, it's the ability to have a shared purpose.
SPENCER: What about helping someone achieve their goals just because you care about them, not because you care about their goal or not because you share value?
JOSCHA: So there are different reasons to care about someone. For instance, I want my children to achieve their goals, and I love them. And in part, that's because I don't just perceive myself as a parent, as an individual, but I perceive myself as a family line, as a multi-generational entity, an organism that is extended much further in time than my individual lifetime. And my children are a part of that, too. So it's basically like keeping a boat going and raising the next generation that rows that boat. And I want them to achieve their goals, in part because I love them as individuals, and I want them to succeed as individuals because they are human beings that I am in love with. But there is also the thing that there is a shared purpose, that we are creating something together, not just within our own family, but across many families, many human families, and life itself on this planet. And this is a sense of love that I can cultivate. And that plays into my role as a parent. It's not just about abstractly caring about somebody because I'm infatuated with them, or I think they're cute.
SPENCER: I don't know you very well; I've never met you before today. But it seems to me, based on this conversation, that you have a model of how minds work in general. And this comes out both in your AI work and in the way you think about humans. And it seems like you're applying this model of how minds work to sort of everything you're doing, beyond just your work in artificial intelligence. Would you say that's true?
JOSCHA: Yes. But it's also because my work started out from this direction. I entered academia, primarily, because I wanted to understand how minds work. I wanted to understand who we are and how we relate to the world. And the reason why I ended up working in artificial intelligence — as a subfield of cognitive science — is because I perceive it as the most productive area right now, where we can create concepts and test them by making computational models of our understanding. There is no other field that makes progress at this pace and with this certainty right now, when it comes to understanding how the mind works.
SPENCER: So do you think that AIs help us understand the human mind significantly? We're not just building a different type of mind that is sort of alien to our own?
JOSCHA: Yes. I think that by trying to teach the computer how to be conscious, the computer is teaching us how we are computers, of course. But the deeper insight is that by building models that fall short of how our minds work, we understand better what our minds are and what they are not.
SPENCER: I also know that you think of AI as a philosophical project. Do you want to comment on what that philosophical project is?
JOSCHA: If we think of the intellectual traditions that exist in our civilization, we have several of them. For instance, mathematics. Mathematics is a tradition that is studying languages. And these are, in part, geometrical languages and, in part, symbolic languages, and the relationships between them. And all these languages are defined from first principles, from the ground up. So, we know what truth means in these languages. And on the other hand, we have, for instance, philosophy. And philosophy, as one of the other core traditions, is about discovering what's the case by studying theories—theories that we can have about anything (integrative theories). And in some sense, most of the subfields that we are looking at are either subfields of philosophy or subfields of mathematics and the natural sciences. The question is, how can we get philosophy and the exact sciences together? And the issue with the theories in philosophy is that they are expressed in natural language. And they have to be, because a language that uses mathematics to describe reality is too narrow and brittle to say very meaningful things. If you want to talk about love in mathematics, you're not going to get very far. Expressing anything meaningful to you in mathematical terms is very hard. So what we typically do when we make a mathematical model of anything is make a caricature of it, an extreme simplification that has very, very few moving parts. And then we translate this into an extremely exact description that allows us to make very exact predictions of the dynamics in this artificial world. And if this artificial model that we have created happens to be isomorphic to the dynamics that we are looking at, then it can be very powerful. But it's very brittle, because it's very easy for the world to leave the parameters that we managed to describe in mathematics.
And so we use the more powerful natural language, which is disambiguated by already presupposing that you have a mind with an understanding of a dynamic world of which you are a part, and an understanding of your own agency — without being able to spell out what that is, but with it implicitly being there in your perception. And you build your theories based on that. And you will often go wrong with these languages, because they are so ambiguous that it's unclear what terms actually mean in these languages. So if we want to go beyond human abilities — and I think that philosophy has been stagnating over the last maybe even couple hundred years, in part because we reached the limits of what the human mind can do without augmentation in philosophy — we need, in some sense, to mathematicize the mind. We need to bring together mathematics and philosophy. We can extend philosophy, but we need to do this by making this interface automatic. And this means that we have to build machine minds. We have to build minds that can do philosophy by doing mathematics at a much higher level of acuity and detail than human beings can. And in part, this is a project that is much older than AI itself—the mathematization of philosophy. We also discovered that classical mathematics is not the right semantics. The right semantics are computational semantics, which were mostly only discovered in the last century. And this hasn't even fully percolated into philosophy. I don't think that most of philosophy has understood that semantics is actually computational and not stateless, as in classical mathematics. So truth becomes a stateful notion; you no longer have a stable platonic sense of truth, but one that is tied to the procedure by which you acquire the truth, and so on. And this has grave implications for what we understand about the universe and about ourselves.
So in that sense, the work that has been done in the computational sciences has been very productive for philosophy already, despite not being able to build automatic minds yet. But when AI started, I think that many of the people who did this — certainly people like Marvin Minsky — were aware of the philosophical relevance of artificial intelligence. And so artificial intelligence has always been two things at once. It was building better technological solutions for problems for which people need to be intelligent — at the moment, it's mostly the automation of statistics — but it was also the attempt to solve the most important question in philosophy. That is: what are our minds, expressed as a mathematical model that we can actually run and execute automatically? And this philosophical project is a very tiny fraction of the bulk of what the field of AI is doing, because philosophy is not the most important application in the world. The benefits of philosophy tend to be tiny. And even though I think it's the most important project of all philosophy, for the most part, it is not that important to humanity. Most of the big questions are no longer philosophical questions. But to me, it's the most interesting and urgent question there is. It's definitely the one that I want to spend the bulk of my lifetime on.
SPENCER: Could you elaborate on this idea that philosophy needs to be computational, rather than stateless? Maybe you could give an example of how philosophers go wrong because they're using the static, more mathematical concepts rather than algorithmic concepts?
JOSCHA: Most philosophers don't even use the mathematical concepts, but a very good litmus test is the understanding of Gödel's incompleteness proof, Turing's halting problem, and related theorems such as Löb's theorem and so on. Gödel's proof is a very good example. The context of Gödel's proof was that Hilbert asked mathematicians to deal with a problem: Cantor had shown that there is a gap in mathematics. There was this attempt to take Principia Mathematica and derive all of mathematics from it. And there were some indications that it was difficult to do this. Take Cantor's example of the infinite set. Cantor defined number theory — and we can derive most of mathematics from number theory and its extensions — but Cantor did this via sets. And a number is the cardinality of a set, which means the number of the members of the set. What people discovered when looking at sets is that the number of subsets of a set is larger than the number of members of that set. For any set with more than one member, that is clearly the case, because the subsets of a set are the combinations that you can make of all the members in different ways, and you can always make more combinations than you have individual members. So, there is basically a rule that you can prove: the number of subsets of a set is always larger than the number of members of the set. Now let's look at the infinite set. The infinite set has infinitely many members, and the number of subsets that you can create is larger. So now you have multiple infinities (that's still okay, right?). So, we have to deal with this somewhat unintuitive fact that we have multiple infinities, and some infinities are substantially larger than other infinities, because you might not even be able to create mappings between them.
But there is something worse going on: what about the total set (the set of all sets)? The total set is going to contain the infinite set. It's also going to contain the set of all the subsets of the infinite set. And it's also going to contain the set of all the subsets of the total set. And this latter part means that we don't just have different kinds of infinity; it means that the total set (the set of all sets) has a different number of members than the total set itself. Suddenly, the number of members becomes inconsistent. This notion of infinity leads to contradictions.
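[For finite sets, the subset rule Joscha cites can be checked directly by enumeration. A minimal sketch in Python, standard library only — the function name and the range of sizes tested are illustrative choices:]

```python
from itertools import chain, combinations

def powerset(s):
    """Enumerate every subset of s (as tuples)."""
    items = list(s)
    return list(chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1)))

# A set with n members has 2**n subsets -- strictly more than n
# for every finite n. This is the finite shadow of Cantor's theorem.
for n in range(6):
    subsets = powerset(range(n))
    assert len(subsets) == 2 ** n
    assert len(subsets) > n
```

[For an infinite set the enumeration never terminates, which is exactly where the trouble with "the set of all sets" begins: the theorem still applies, but the procedure can no longer be run to completion.]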
SPENCER: Just like the barber paradox, right?
JOSCHA: Yes, it's a paradox. And so there was this bigger question — the barber paradox is a variant that was brought in by Russell, right? (Russell's antinomy) — the barber is shaving everybody in the village who is not shaving himself; who is shaving the barber? Can you exclude such contradictions? And after Hilbert called out to the mathematics community to come up with a solution, to build a machine that runs the semantics of mathematics without breaking, what Gödel showed is that such a machine cannot be built. And he demonstrated this by first building a machine that could do logic based on number theory. And then he showed how he could translate an arbitrary logical statement, by looking at its alphabet, into a number. And then he basically built the entire machine within itself, so the machine was able to talk about itself. And then he showed that you could always construct self-referential sentences that would contain a contradiction, which would blow the whole thing apart. And he could show that there was no way to exclude this on first principles. And to Gödel, this was devastating, because he genuinely believed in the platonic sense of the truths that mathematicians got to. Basically, he thought the truths were real. And there is an alternative to this perspective, and that is truth as a predicate that we assign by executing an algorithm. And Gregory Chaitin, I think, has a very intuitive notion of this. He basically says that a proof is the reduction of a statement to its axioms, and it's a kind of data compression (a lossless data compression). And the proof itself is the procedure, the algorithm that you need to execute to get there. And discovering the proof is the discovery of the algorithm. And classical mathematics sometimes claims that the algorithm can have infinitely many steps, which means you never get there and you can never show how you get to the end. And in the computational sense, this is an invalid claim.
You cannot claim that you have a result from a function that doesn't compute. So if we accept that only computable results have values, mathematics changes. It changes in very particular ways that were already apparent at the beginning of mathematics. — There is this story about Pythagoras (that is probably not true) that one of the people in his Pythagorean cult, which was discussing numbers, leaked to the public one of the big secrets of the cult: that there are irrational numbers. Integers are fine. Integers are computable. There's no issue dealing with integers. And rational numbers are just two integers combined into a fraction, right? It's the ratio between two integers; that's why they are rational numbers. And some people had discovered that there are numbers which are not describable as the ratio between two integers. These are the irrational numbers. Examples of irrational numbers are the square root of 2 or pi: they have infinitely many digits, and there is no period in these digits. The digits don't repeat. And as a result, they cannot be expressed as rational numbers. So apparently, Pythagoras didn't want people to know this. And I think he was correct that the world was not ready yet. In classical mathematics, you can deal with these numbers as if they were normal numbers, as if they had a value, as if pi had a value. And physicists have checked out the code base from the mathematics of the previous centuries by assuming that pi has a value. So there are processes in physics that basically rely on knowing pi to the last digit and putting it into some kind of equation that gets computed to get to the next step in the universe. And this doesn't work. Pi is not a value. Pi is just a function from the computational perspective, which means you can plug your machine that computes digits of pi into an energy source, into your local sun. And when the sun burns out, this is your last digit. This is it; you're not going to get more.
You can only compute as many digits as your resources allow. Pi converges to something, but pi itself is better understood as a function. And there is a fundamental difference between a function and a value, because a function is just a procedure that allows you to get to a value. And the value only exists to the degree that such a procedure exists and can be executed. And so if we apply this back to Gödel and his incompleteness proof, we suddenly discover that in the self-referential case, the truth value is not converging. It's unstable. So when you apply the procedure, after every step of the procedure, you might get a different truth value in this self-referential case. This is a property of truth. This is just how truth is. Truth is not something platonic that exists independent of the procedure. And this computational, stateful notion — where we don't have something that is eternal, but things are derived by going through a sequence of states — that is the big transition to the computational paradigm, to constructive mathematics. It doesn't change anything for the mathematics of standard practice. Infinities in the methods of practical mathematics are only "too many parts to count." Infinity is only something that exists in the limit and that you approximate. Mathematicians have only ever approximated infinities; they have never actually computed with them. They may have used these symbols on the next level of description, but there is no constructive way to derive infinities. And I think the true implication of Gödel is that languages that contain infinities as if they existed become self-contradictory. So in some sense, what Gödel has done is force us to rethink how we think about language itself, about what we can express in languages. — And to get back to your original question, and why we went on this tangent: a big litmus test for me was to which degree philosophers, when they are talking about issues in this domain, understand Gödel's proof.
And to my frustration, while studying philosophy and engaging with the philosophy of mind — where Gödel's proof was sometimes cited as a proof of what human beings could do but computers could not — philosophers saw Gödel's proof as an extremely eminent statue or piece of art in the ‘museum of mathematics'. It was something they could talk about, but it didn't affect their own thinking. It was not something that operationally changed anything in the way in which they construct truth. And that's a big issue, which means they don't actually know what they're talking about. They don't understand their own thinking from first principles.
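[The arithmetization step described above — translating a formula over a fixed alphabet into a single number — can be sketched with the classic prime-power encoding. The toy alphabet and formula below are illustrative choices, not Gödel's original coding:]

```python
def first_primes(n):
    """Return the first n primes by trial division (fine for short formulas)."""
    found = []
    candidate = 2
    while len(found) < n:
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

def godel_number(formula, alphabet):
    """Map the k-th symbol (with 1-based code c in the alphabet) to the
    k-th prime raised to the power c, and multiply the results.
    Unique prime factorization makes the encoding reversible."""
    codes = [alphabet.index(ch) + 1 for ch in formula]
    n = 1
    for p, c in zip(first_primes(len(codes)), codes):
        n *= p ** c
    return n

# '0=0' over the toy alphabet '0=S' encodes as 2**1 * 3**2 * 5**1 = 90.
assert godel_number("0=0", "0=S") == 90
```

[Because the whole formula lives inside one number, statements *about* numbers can now talk about statements — which is the self-reference Gödel exploited.]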
SPENCER: That's super interesting. I think what you're saying is that, if you adopt an algorithmic perspective, and you view pi not as a number, but as the limit of an algorithm that computes it [JOSCHA: as a procedure], then you can avoid these sort of seemingly self-contradictory behaviors. You have to generalize that to think of anything as being sort of the result of a computation. So, the number 3 can be computed, because you can do it in a finite number of steps, but pi can't. I think where I don't fully buy into this, though, is that I don't think of any numbers as corresponding directly to reality. I think of all numbers as being sort of fictions. So I guess that maybe makes me less bothered by the fact that some formal systems have contradictions, because I view it all as kind of made up. I think of any kind of relationship we have to mathematics as being us drawing a relationship, like looking at something in the world and saying, “Ah, that thing has a pattern that is also in this formal system we developed.” Now, the fact that we've noticed that they both have the same pattern allows us to switch to the formal system for a while and do some derivations. And then hopefully, what we've learned about the pattern in the formal system will then apply to the real world. So, I'm just curious to hear your response to that.
JOSCHA: What is the real world in your view? What do you think of the real world? What's reality?
SPENCER: Well, fields or something like that, just a bunch of stuff happening in physics. I don't think of...if you think about a table, a table is a concept in the human mind. A table corresponds in the human mind to sort of different configurations of stuff out there in the world as we perceive it through our eyes or through our other senses.
JOSCHA: So we basically understand what a table is; a table is an object that is created by the mind to make sense of a wide array of stimuli and affordances under particular circumstances, right?
SPENCER: Right, exactly.
JOSCHA: Basically, we understand (well, at least at some degree of abstraction) how to construct a system that would come up with tables, right? So we can build a system that is a classifier for tables, and we can imagine that we extend this with robotic interfaces, so it also has the affordances of tables — which are crucial ingredients to a multi-modal model of tables that combines motor dimensions and context dimensions, and visual, tactile, and auditory dimensions into one simulation that encompasses all the tables and is able to separate them from non-tables in similar ways as human beings do — which is not perfect, right? Because tables are not a complete category. But what is giving rise to tables? So you said a field. A field is basically a set of functions that are parametrized by locations in a continuous or discrete way. So you need to have a location space underneath the field, right? And transitions on top of these locations. When you're saying field, I will try to understand what the word ‘field' means in your own mind. When you say reality is a field, what do you mean?
SPENCER: Well, as I understand it, atoms are maybe not the deepest level, not the most fundamental thing. So there's something out there that we model sometimes as atoms, sometimes as fields, sometimes as forces, sometimes as curvature in spacetime, whatever — whatever that stuff is, that's like the stuff that's actually there.
JOSCHA: So we can talk about every single one of these terms and what kind of mathematical construct it is. When you say that there is a particle or that there is a wave function, you make a very particular claim about reality. You say that reality can be described using the following language, a language that is constructed from first principles. If your language is not constructed from first principles, then your language use might not mean very much. So you want to use a mathematical language in which everything is defined from first principles. And if your mathematical language has contradictions, it means that it doesn't mean anything, because it falls apart when you're looking too hard. It only works when you are squinting. And even then, you cannot completely rely on it. So you get things like renormalization, you get some kind of arcane hacks into mathematics that do not allow you to truly understand what you just expressed. What I suggest is to switch to languages that are consistent. And if you want to create internally consistent languages, you have to go into the realm of computation (that was the main argument here). So basically, our own mind is able to build a language in which we can approximate tables. And we can, I think, understand the way in which that is happening. But AI is not working in any way that is close to how human brains are working. We have been able to mimic some of the principles that the brain has achieved in its different way of working. The brain is a self-organizing system that is made of cells — individual reinforcement agents that somehow talk to each other — working in very different ways than our artificial neural networks. But still, we find similarities between the features in the visual cortex and the features in a machine learning system that uses convolutional neural networks and architecture search to make sense of images or video. So there is a similarity in the abstractions that we discover.
And that's because we are able to build an abstraction of the language that the brain is using that works in a similar way in our technical system. And when we talk about what reality is — when you say it's a field, or there are forces, or there are curvatures — we are always talking about mathematical things that we eventually need to compute. And this means we are talking about a bunch of algorithms.
SPENCER: So what do you think of Stephen Wolfram's idea that maybe the universe is a computer, in a kind of abstract sense: that what it does is just compute, step by step?
JOSCHA: There are two possibilities; either the universe is automatic, it's mechanical, or it is a conspiracy. Which one is it?
SPENCER: What do you mean a conspiracy? [laughs]
JOSCHA: Either the universe happens by itself — and it seems to happen in a very regular way, so the regularity must emerge by itself, which means it's kind of automatic, and if it's automatic, you can describe it by a bunch of rules. Or, somebody makes it appear as if it was regular, as if it would go from state to state in a particular way, and so someone is basically setting up reality as a conspiracy. And this conspiratorial thinking makes me uncomfortable, because I would think that such a being — one that makes reality merely pretend to be regular, and that would contain us — would probably have to have a natural cause by itself at some level too, right? So every supernatural being is eventually the result of natural causes. So everything will have natural causes; everything falls under the notion of nature. The naturalization of our experience means the mechanization of the universe. And mechanization, in the modern sense, is not solid objects pushing and pulling against each other, but computation. It basically means that systems go from state to state in a non-random fashion, preserving information.
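[This "state vector plus transition rule" picture is exactly what a Wolfram-style cellular automaton makes concrete. A minimal sketch of an elementary cellular automaton — Rule 30 is just one illustrative choice of rule, and the world size is arbitrary:]

```python
def step(cells, rule=30):
    """One tick of an elementary cellular automaton: each cell's next
    state is a fixed function of its (left, self, right) neighbourhood,
    read off from the bits of `rule`. Wrap-around boundary."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# A tiny 'universe': a fixed state vector plus a fixed transition function,
# going from state to state in a non-random, rule-governed fashion.
world = [0, 0, 0, 1, 0, 0, 0]
world = step(world)
assert world == [0, 0, 1, 1, 1, 0, 0]
```

[Nothing in the rule table mentions the patterns that emerge over time; regularity arises automatically from repeated application of the transition function.]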
SPENCER: We've touched on this idea of computable. Do you want to talk for a moment about what computable means?
JOSCHA: It's a very general notion that, by itself, means relatively little. It's basically just a frame in which you can construct languages. And to me, computation means that you have a system that you can describe by states and transitions. In this sense, quantum mechanics is a computational theory, because in quantum mechanics, you have a universe that is characterized by a state vector — even if that state vector cannot be efficiently represented, it can be represented in some sense. And then you have a transition function that tells you how to go to the next state in this model. And so quantum mechanics is a computational theory. A fascinating thing about quantum mechanics is that it's in a different computational class than the particle universe. The quantum universe, according to quantum computing, can perform some algorithms efficiently that a classical computer cannot compute efficiently. And now, if you think about what it means to compute, it means to take states and to map them to other states in a regular fashion. That's all there is. So it's a very general notion. Once you're able to make such a machine that is able to go from state to state according to some rule, you can build an arbitrary world. For instance, you can build a computer game with arbitrary physics in it. And I think that the implication of quantum mechanics is that the world that we are in — the particle world that we construe as observers who perceive themselves as consisting of particles, because this is the level at which we emerged, in some sense — is implemented on this substrate, the quantum universe, in an inefficient way. Imagine that you live in Minecraft. Minecraft runs on the CPU of your computer. And inside of Minecraft, you can build computers from redstone and so on. And the computers in Minecraft are slower than your CPU, but they are only polynomially slower, or maybe even just linearly slower.
They are slower because the CPU is mostly not computing your Minecraft computer; it's mostly computing other stuff in Minecraft. And so your computer only gets a small fraction, so it's expected to be slower, but it's just slower by some factor. So just by using a fast enough computer, you can scale up your computer in Minecraft so far that you can run another instance of Minecraft on it. And in principle, that's totally possible if you have enough patience. And if you happen to live inside of Minecraft, and Minecraft gets slowed down by your large computer, you will not notice, because you will get slowed down too. So it doesn't really matter, as long as the parent universe has enough compute to run you. And I think that the implication of quantum computing is that the particle universe is implemented on the quantum universe in such an inefficient way that, from the perspective of the quantum universe, the particle universe gets slower and slower. It's not just polynomially slower; it's that most of the contributions of the quantum universe contribute less and less in every step, because the universe is somehow branching out of our particle universe. And if we were able to harvest some of the computations of the substrate directly, then we could build computers that are more efficient, that can compute some things faster than the computers that we make from particles.
SPENCER: So I just want to comment on different definitions of computability, because, as a mathematician, I kind of think about computability a little bit differently. So for example, a number is computable if there exists an algorithm where you can plug in a number N and get the first N digits of that number. In other words, you can define an algorithm that approximates that number to any degree of accuracy you want. And I think we're working with slightly different definitions of computable, so I just want to flag that.
JOSCHA: Yes. So that's a completely fine definition of computability, and it's also [inaudible]. And so there is a difference between a number that you can fully determine — in such a way that you can use its value to derive the next result — and one that you can only approximate. So if you have a number that you can only approximately compute — that you can compute to a certain degree because it's convergent, and you have a procedure that exploits this convergence — then you can also approximate the results that you derive from it, if you are able to use it digit by digit and the first digits are more significant than the later digits. So under certain conditions, this helps you. But it doesn't help you if this condition doesn't hold. So if you have dependencies where the digits that are very, very far in the back are extremely relevant for your result (maybe more than the digits that you have initially), then having an approximation of pi might not help you determine the result. So in this sense, you get to a practical notion that is more relevant to me as a computer scientist or as an engineer, where I would say, “No, this is not actually computable. I can only approximate it. And the approximation is useful in some contexts and useless in others.” So a different perspective that I have on the world is that, for instance, the notion of continuous space, as a computer scientist, doesn't make a lot of sense, because it's not actually computable. I'm not able to build some kind of lattice that has infinite density and compute transitions in it at infinite resolution in a finite number of steps. I can define this in abstract mathematics, but the languages that are required to define it have contradictions of the nature that Gödel discovered. So the only thing that I and mathematicians can ever do is to work with finite lattices when we want to describe some continuous space.
And what we mean by continuous space in physics and mathematics, ultimately, is a space that is composed of too many locations to count. And the objects that are moving through this space might consist of too many parts to count. As a result, you need to use operators that converge in the limit, and a set of operators over too many parts to count that converge in the limit is roughly geometry. It's a particular kind of trick for computing things. And some of the stuff in geometry is not computable in the sense that you're able to get to a perfect result. So imagine that you try to do a rotation of an object in your computer with finite resolution. If you don't preserve the original object, and you do this a number of times, then the object will lose its shape. It will fall apart because of rounding errors. And if you want to get rid of the rounding errors, you need to use tricks. You need to store the original shape of the object and reload it from time to time, or something like this. So in practice, these things matter. They don't matter in the kind of mathematics where you can perform infinitely many steps in a finite amount of time. But in any practical sense, this doesn't work. So this is basically the transition that my own mind has made from the mathematical tradition that existed before the last century to the one that was invented in the last one. Actually, constructive mathematics is much older than this. But in classical mathematics, constructive mathematics, I think, was seen as a weird aberration. And from the perspective of computation, it's the part of mathematics that actually works.
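The rotation example is easy to demonstrate. In this sketch (my own toy, with hypothetical names), a square stored in 32-bit floats is rotated one degree at a time for a full revolution, so rounding errors accumulate; the trick Joscha describes, keeping the original shape and applying one composed rotation, avoids the drift.

```python
import math
import numpy as np

def rotate(points, angle):
    """Rotate 2-D points by `angle` radians with a rotation matrix."""
    c, s = math.cos(angle), math.sin(angle)
    r = np.array([[c, -s], [s, c]], dtype=points.dtype)
    return points @ r.T

square = np.array([[1, 0], [0, 1], [-1, 0], [0, -1]], dtype=np.float32)
step = math.radians(1.0)

# Naive approach: apply 360 incremental rotations; float32 rounding errors pile up.
drifted = square.copy()
for _ in range(360):
    drifted = rotate(drifted, step)

# The trick: keep the original shape and rotate it once by the composed angle.
restored = rotate(square, 360 * step)

print(np.abs(drifted - square).max())    # error after 360 incremental rotations
print(np.abs(restored - square).max())   # error after one composed rotation (much smaller)
```

Both final positions should be the original square (a full 360° turn), but only the version that reloads the original shape actually returns to it cleanly.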
SPENCER: Okay, I think I'm beginning to home in on our philosophical differences. I think one thing is I'm not that confident that the universe is computable. So when you say, “Well, you can't really have an infinitely fine grid, you can't have continuous space,” I'm not that confident in that. I'm not saying that the universe is definitely not computable. I just feel undecided on that question. I feel like you're more confident it's computable. Is that right?
JOSCHA: So do you think that the universe exists?
SPENCER: Sure. And yeah, in some definitions it exists, absolutely.
JOSCHA: So what does exist mean?
SPENCER: That's a tough one. [laughs] It is there... the stuff happening... Yeah, I don't know how to define existence. Do you have a better definition?
JOSCHA: I don't know. From my own perspective, for something to exist, it needs to be implemented. And something exists to the degree that it's implemented. That's also true for highly abstract objects: tables exist “kinda, sorta”. They exist as long as you squint very hard, but there are borderline cases where it's not clear whether it's a table or not. And when you zoom in very hard, it's just all a bunch of atoms. And so at which point does the table start and other things end? It's not that clear. So the table exists to the degree that it's implemented. It exists in a certain context, in a certain coarse-grained description. And for coarse-grained objects, I think it's completely obvious that they only exist to the degree that they're implemented. So, you could say that the financial system exists, to a certain degree of approximation, to the degree to which it is actually implemented. There is a part of the financial system that is a fiction, that is not actually implemented, and that is changing under our eyes and is melting away and so on. But there is a part that is rock-hard implemented, and that is not a fiction. But it's an approximation that changes from time to time. And the physical universe, I think, in order to exist — for instance, to say that electrons exist — the electron needs to be implemented in some sense. So you could say, “I'm not sure if the universe exists, but electrons exist.” I can talk about them because I can measure them, I can interact with them, and so on. They exist to the degree that they are implemented. What does it mean for an electron to be implemented? It means that you have to have a type of particle that has a spin like this and a charge like that. And spin and charge are defined as interactions with other things that play out in this way. So electrons are a particular way to talk about patterns of information.
To say that the universe exists means that a certain causal structure exists that gives rise to the observations that I make in a regular fashion. And this, to me, means there is an implementation of some sort. And I can talk about the existence of the universe to the degree that I'm able to discover a language in which I can talk about its existence. So the inconvenient thing is, if I am unable to describe what existence means, then it could imply that existence doesn't mean anything and the universe doesn't actually exist.
SPENCER: Yeah, I think of existence differently. I have this way of looking at existence — and also looking at truth — as being fundamentally ambiguous, that when we use the word exists, or we use the word truth, we actually mean a bunch of different things in different contexts. I call my particular way of looking at this ‘the seven realms of truth' because I've been able to find sort of seven different ways we talk about things existing or things being true. (I'll put an article in the show notes for anyone interested.) But one way we talked about things existing is like, they are there in physical space — like the way you might say, “Ah, there's fields.” There's something out there that physics is executing — and that's one type of existence. Another type of existence is the idea of an electron. Like, in your brain, there's an idea of an electron. In my brain, there's an idea of an electron. In a database, there's an idea of electrons. In Wikipedia, there's an idea of electrons. And to me, that's a different definition of existence. So, I'm not saying that one of them is the ‘right way to exist', I'm just saying that's a different form of existence. And then you could still have yet another type of existence, which is when I use the word electron, you have an understanding of what that means. And when you use the word electron, I understand what that means. And that's what you might think of as intersubjective existence. Like the language, the word electron exists in the sense that there's sort of a shared understanding of what the word electron means. And you can keep going through them. And so I kind of have seven disambiguations of what we mean by existence. 
And then with these disambiguations, you can ask questions like, “Well, maybe some of these are actually the same as each other; maybe you only need five, not seven.” But then you start getting into very controversial philosophical territory when you're saying, “Well, this way of existing is the same as that way of existing,” and it's hard to get consensus.
JOSCHA: What I like about this treatment is that it describes very well the fact that existence is a homonym — the same word means many different things, and these things change depending on the context. What I don't like about it is that you do no further disambiguation; you just, basically, throw whatever fits the context at the situation and don't bother with the disambiguation. You're modeling your own thinking, and I don't think that you can do this. I think that you need to have tight and narrow definitions. You can have multiple ones, and that's fine. Ultimately, it is only important that we know what we mean by our concepts. And then you describe which one applies exactly to which context, so it's no longer fundamentally ambiguous. I don't think that if you do philosophy, or if you do modeling as a mathematician, you are entitled to arbitrary ambiguity. I think that's a cop out.
SPENCER: Yeah, I think my difficulty there is that I think it's so difficult to define existence in almost any of these cases. You can point to it. I can talk about, “Ah, well, something can exist in the book, Moby Dick,” that's a certain kind of existence that's in that book. And I can point out how that's not the same as an electron existing in physical space. But I have a very hard time coming up with formal definitions of these different types of existence. But I like trying to disambiguate them. And then once we have the disambiguation, then we can sort of talk about the seven types and then kind of relate them to each other. But yeah, I have basically failed to formally define them.
JOSCHA: Yeah, but I think that with a little bit of time and attention, we can figure this out. So basically, with respect to Moby Dick, we would probably go in the direction where we say that the whale exists in the book in the sense that somebody who is capable of mapping English language to mental simulations, and has undergone the following training in the world, and has built the following conceptual space, so that they're able to produce the following mental simulations, will have, by reading Moby Dick, a mental simulation of that whale. In this sense, the object is in there; it's being referred to in such a way that it connects to the conceptual and perceptual representation space of the person who reads the book. So there is a reference to the system of meaning that the individual has built in their own mind in order to interpret the shared reality.
SPENCER: I liked that a lot because I do think that helps concretize it. I think my concern is, I feel like it bakes in a certain primacy of the sort of mental representations — like it's saying, “Well, it has to do with the mental simulation someone has of a whale when they read it,” as opposed to, say, statistical properties in language that connect the word whale to other words, or other aspects of the fact that the word whale appears in Moby Dick.
JOSCHA: So the reason why I put this in there is because the statistical relationships that the word whale has to other words are only important to the reader to the degree that the mind of the reader has discovered these relationships. So, it depends on the actual structure that exists in the mind of the reader that makes the book intelligible to the reader. It's not dependent on the statistics in the language itself. You can argue that the statistics of the language itself capture these relationships to some degree, and the mind of the reader is modeling a similar thing. But that's not the point here; the mind of the reader actually learns it the other way around. We first learn about words by pointing at stuff, seeing and feeling it, and hearing the word attached to it, and then making a multi-modal model that combines all these modalities into one coherent thing. And then we learn the syntax of symbols. And later on, we learn the relationships between linguistic objects and the things that we are pointing at. So in this sense, we learn it the other way around. But I would agree with you that the existence in the mind of a reader is different from existence as a physical object, as a causal structure that the mind of the reader refers to. Because the mind of the reader can contain things that cannot exist in the physical world. And you have to define objects in the physical world in such a way that they cannot exist in a contradictory sense. They need to consistently exist in a frame of reference.
SPENCER: Before we wrap up in a few minutes, I just want to go back to the topic of AI. In particular, I'm really curious to hear your thoughts on where we go from here in AI. We've had this revolution in the past years where deep learning has really come to the forefront, and we're seeing these deep neural nets that are producing state-of-the-art results: generating photorealistic human faces, producing human language (like GPT-3), producing music, and so on. So what do you see as coming next?
JOSCHA: In the first years of deep learning, I thought that compositional function approximation is what deep learning is about. In a way, that's true, but it's more narrow. Deep learning is about differentiable programming, which means we write programs in such a way that they describe a state space that is somewhat continuous. So the solutions to problems are described in such a way that there is a space of possible solutions that can be searched by following a gradient. And deep learning is doing just that. It's basically following gradients in discovering solutions. And this means that the solution has to be expressed in a very particular way. We typically express it using linear algebra, which means a neural network. So a neural network is basically a chain of weighted sums over real numbers, with some non-linearity thrown in so it can do more interesting things if you need it to, but in such a way that it is still differentiable, which means somewhat continuous.
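What "a chain of weighted sums over real numbers with some non-linearity thrown in" looks like can be written out in a few lines. This is a minimal sketch of a forward pass (my own toy, not any production framework; the layer sizes are arbitrary):

```python
import numpy as np

def relu(z):
    """A simple non-linearity: zero out negative values."""
    return np.maximum(0, z)

def forward(x, layers):
    """A neural network: a chain of weighted sums, each followed by a non-linearity."""
    for w, b in layers:
        x = relu(w @ x + b)   # weighted sum over real numbers, then non-linearity
    return x

rng = np.random.default_rng(0)
# Two layers: 3 inputs -> 4 hidden units -> 2 outputs, with random weights.
layers = [(rng.normal(size=(4, 3)), rng.normal(size=4)),
          (rng.normal(size=(2, 4)), rng.normal(size=2))]

out = forward(np.array([0.5, -1.0, 2.0]), layers)
print(out.shape)   # (2,)
```

Every operation here (matrix multiply, add, ReLU) is differentiable almost everywhere, which is what makes the whole chain searchable by gradient methods.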
SPENCER: I just want to unpack that for people who are not familiar with the terminology. So basically, I think what you're saying here is that if you take a neural net and you change one of its inputs a little bit, that changes the output just a little bit. And essentially, this idea of continuity is that you're not having jumps. If you change one input a tiny bit, you don't get a sudden jump in the output. Am I describing what you're saying accurately?
JOSCHA: It's not completely true. So basically, you can have neural networks where you change the input a little bit and the output changes dramatically, if the change is significant and the neural network has learned to respond to changes in the environment. The neural network is able to discover a function that has as its input pictures of dogs and cats, and as its output whether it's a dog or whether it's a cat. And obviously this is a very complicated function. And the fascinating thing is, it's possible to learn such functions by describing the input as real numbers — somewhat continuous approximations — and then having sums of these numbers weighted by the factors they're multiplied by, and then changing these factors, these weights. And there is a search algorithm that allows you to change these weights step by step. And what you need is to be able to make very, very small changes. And by making these small changes to the network structure, to the weights in the network, to the factors by which you multiply the numbers internally, you need to get a continuous movement of the output in the right direction that is somewhat predictable. And if you set up your architecture in the right way — if you use the right number of layers and the right number of nodes and the right set of training data — you're very often able to get your model to converge. But these models are not optimal for the most part. So for instance, they are not sample efficient, which means you need to feed in images way more often than you need to feed them into the human brain to get to the same degree of resolution or classification. There is also the issue that the model has many more possible states than the world has. An ideal model should have exactly as many states as the domain that it describes, and capture exactly the dynamics of the domain, and not overfit. And neural networks basically have dramatic potential for overfitting.
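The step-by-step weight-change procedure Joscha describes is gradient descent. Here is a minimal sketch (my own toy example, fitting a single weighted sum w*x + b to noisy data, rather than training a deep network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 3x + 1 plus a little noise. The "network" is just w*x + b.
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 1 + rng.normal(0, 0.05, size=100)

w, b = 0.0, 0.0
lr = 0.1                                  # step size: very, very small changes
for _ in range(500):
    err = (w * x + b) - y
    # Gradients of the mean squared error with respect to w and b.
    grad_w = 2 * np.mean(err * x)
    grad_b = 2 * np.mean(err)
    # A small step against the gradient moves the output predictably toward a solution.
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))           # close to 3 and 1
```

The same loop, applied to millions of weights instead of two, is essentially what training a deep network amounts to: each small weight change moves the loss a predictable small amount downhill.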
And this means that you can give them patterns that they have never seen before, and that to a human being don't look like anything, but the neural network will say it's an ostrich, because in some mapping that the neural network produces, this maps onto that kind of category. So we have this problem of adversarial examples. And neural networks also have difficulty learning certain kinds of models efficiently. And this is especially true for causal structures. Basically, a neural network has difficulty learning a computer program, learning a piece of code. And the best representation that you can have of a computer, of course, is the program: for instance, a shader program on the GPU, or some program that can run efficiently on the CPU. And the linear algebra that we use in neural networks — we have basically redesigned our GPUs, our graphics units, to be able to compute it somewhat efficiently. But it's a hack. We basically use lots of libraries that we are applying, not because they make the best possible models, but because they make models at all, and we know how to make them, and we have hardware that somehow can compute them somewhat efficiently. But it's really nothing like the human brain. It seems to be that we are in some kind of bubble. And the question is, how can we get out of this bubble of deep learning algorithms, libraries that implement these algorithms, and hardware that runs these libraries efficiently, into things that are more human-like? Not necessarily brain-like, because we maybe don't need to work like a brain in the same way. In a technical system, we don't have the same constraints. For instance, we can afford determinism, which is difficult to achieve in a biological system. We can have a fixed design, whereas the brain needs to be self-organizing, so everything in the brain needs to converge to the right solution, whereas we can just impose it sometimes.
Or, information transfer over long distances is extremely slow in the brain compared to in our technical systems. So, we can build things that nature cannot build in the same way. And the technical architectures we come up with would probably be different from the brain. But still, our brains have discovered things — evolution has discovered things — that are way more efficient than the things that we currently have in deep learning. And so, what is the third wave of AI systems? These are systems that basically can extend themselves, for instance, into the world; that will be able to understand and create languages and integrate with people in a deeper way; and that use more universal representations that can do program synthesis just as well as they can follow gradients. This is the stuff that I'm interested in.
SPENCER: What do you think of Codex, the OpenAI system that actually writes code? Do you see that as a step in this direction? Or do you see that as unrelated?
JOSCHA: I'm unsure. I like Codex. I think it's a major achievement, and I don't think that it can do all the things that people can do very well. But it can do very well a thing that people do very often. It can basically do a lookup on Stack Overflow very well, because it has read all of Stack Overflow and memorized it in a way. And it's able to context-sensitively give you pieces of code that somebody else has written, or would have written in a similar context, basically just by doing the statistics of code. So I suspect that Codex might fall short of giving us the next AI algorithm in a way that nobody has ever thought about. But most of the stuff that we need to do in programming is not that creative, and requires a lot of cognitive load that can be taken off our shoulders by such a system, so it's doing a certain thing very well. There's also another aspect. People like you and me tend to be drawn to computers maybe because we are slightly aspie, which means we are very good at pattern matching over short distances, but we might have difficulty seeing the big picture — basically, seeing, doing art, going to a very deep level. (And don't take this personally if that's not true for you, but I am a much better designer than I am an artist. And I'm not even that good of a designer.) It's just that we are drawn to computers because they script reality. The reality of the computer is not deep. It's all flat, conceptual hierarchies that are based on scripts. GPT-3 and Codex are very good at scripting, but they might not be as good at perception. And the integration of reflection and perception, I think, is going to be a crucial thing that needs to be understood. How can we take all the perceptual features and combine them in such a way that we get a coherent model of reality that is also consistent with the perceptual data at any given moment?
SPENCER: Joscha, thanks so much for coming on. This is a really interesting conversation.
JOSCHA: Likewise, I enjoyed this very much.
JOSH: A listener asks, “In your opinion, what's one overrated idea or common belief? And, what common wisdom in the culture do you want to push back against?”
SPENCER: I think one really overrated idea is that your in-group has the right answers and the out-group is bad and is undermining everything. And while there's probably an element of truth in that — like your in-group probably does have the right answers to some things, and there probably is an out-group that's causing some problems — I would also argue that in most cases it is an especially problematic belief, because no one group has all the right answers, and there really aren't very large groups that are all evil. There are going to be some bad people in these groups. But generally speaking, when you have really large groups of people, they're not just systematically evil or bad. And so I just think that there is more wrong with every group and more right with every group than most people want to acknowledge, because they want to believe their own group has the good ideas and the other groups have the bad ideas. So I would just advocate looking for the good in everything, trying to find what's right in different worldviews. It doesn't mean you adopt that worldview. Maybe you still stick to your original worldview, but you bring in something. You now have a more accurate understanding of the world than if you just believe that your own worldview has all the answers.