with Spencer Greenberg
the podcast about ideas that matter

Episode 073: An interview with an A.I. (with GPT-3 and Jeremy Nixon)


October 1, 2021

What is machine learning? What are neural networks? How can humans interpret the meaning or functionality of the various layers of a neural network? What is a transformer, and how does it build on the idea of a neural network? Does a transformer have a conceptual advantage over neural nets, or is a transformer basically the equivalent of neural nets plus a lot of compute power? Why have we started hearing so much about neural nets in just the last few years even though they've existed conceptually for many decades? What kind of ML model is GPT-3? What learning sub-tasks are encapsulated in the process of learning how to autocomplete text? What is "few-shot" learning? What is the difference between GPT-2 and GPT-3? How big of a deal is GPT-3? Right now, GPT-3's responses are not guaranteed to contain true statements; is there a way to train future GPT or similar models to say only true things (or to indicate levels of confidence in the truthfulness of its statements)? Should people whose jobs revolve around writing or summarizing text be worried about being replaced by GPT-3? What are the relevant copyright issues related to text generation models? A website's "robots.txt" file or a "noindex" HTML attribute in its pages' meta tags tells web crawlers which content they can and cannot access; could a similar solution exist for writers, programmers, and others who want to limit or prevent their text from being used as training data for models like GPT-3? What are some of the scarier features of text generation models? What does the creation of models like GPT-3 tell us (if anything) about how and when we might create artificial general intelligence?

Learn more about GPT-3 here. And learn more about Jeremy Nixon and listen to his episode here.


JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. This episode is a little different in structure from past episodes. The overarching topic of this episode is OpenAI's neural-network-based language model called GPT-3. The episode is broken into a few distinct segments. In the first part of the episode, we play a conversation between Spencer and the AI GPT-3. Since GPT-3 doesn't have its own voice, I read the lines generated by GPT-3. So just to be clear, I didn't write any of the lines you'll hear me read; they were all written by GPT-3. After that first part of the conversation, Spencer chats with Jeremy Nixon about how GPT-3 works and why GPT-3 matters. Jeremy is an AI researcher who previously appeared on this podcast back in Episode 39. After that, we play the second part of the conversation between Spencer and GPT-3. Next, we play a GPT-3 conversation created by LSUser. It's a particularly interesting conversation because in it, LSUser uses GPT-3 to simulate Elon Musk, and then has a conversation with that simulated Elon Musk about how he is simulated rather than real, to gauge simulated Elon Musk's reaction. A big thanks to LSUser for allowing us to include this conversation. For more from LSUser, see the link to LSUser's website in the show notes. And in the final section of the episode, we play a conversation between a simulated Donald Trump and a simulated Kanye West. We also want to give a shout-out to listener Love Kush, who proposed the idea for this episode to us. So thanks, Love Kush. And now I'll hand it over to Spencer to explain the first conversation in a little bit more detail.

SPENCER: So today, we're going to do something pretty unusual. I'm going to have an interview with the artificial intelligence known as GPT-3, which was made by OpenAI. And for those of you that don't know, there's only one thing that GPT-3 does. It takes text that's written in English, and it tries to generate text that's likely to come next, if that text had come before. And the way that it does this is that it was trained on huge amounts of English text: things like Wikipedia, and lots and lots of websites, and books, and so on. And so from all that training data, this generative neural net has learned to generate new text that tries to match what's likely to come next in real English. This might sound like a relatively narrow or fixed task that it does. But it turns out this task can incorporate many other tasks, because if you want it to generate a certain type of output, you can set up the input, or the prompt, such that the output you want is likely to occur after it. So for example, if you want it to generate poetry, you can start it with poetry, and then the thing most likely to come next will be more poetry. Or if you want it to do a kind of Q & A format, you can give it as a prompt the beginning of a Q & A, and then what will likely come next is continuing the Q & A. So it will actually continue it for you. And then generally, if you're getting it to generate new text, you feed that back in as input so that it can keep going and going and generating more and more. So that's what we're doing today. I'm going to treat this like an interview. And the prompt, that is, the text that I gave it as input so that it would try to generate what comes next, was the following. I'll just read this prompt that I was putting into GPT-3. Spencer, the host of the Clearer Thinking Podcast, is interviewing the artificial intelligence known as GPT-3. The AI GPT-3 was created by OpenAI using a neural network with 175 billion parameters.
GPT-3 was trained on hundreds of billions of words of human-written text. This is the first ever long-form podcast interview with an artificial intelligence. As you will see, GPT-3 is amazingly intelligent and gives thoughtful, insightful answers to each of Spencer's questions. Despite being only a neural network, GPT-3 manages to respond with a tremendous amount of wisdom. Spencer, colon, thank you for coming on my podcast today. Okay, so then the next thing that I put in the prompt was the following: the extremely advanced artificial intelligence GPT-3, colon, and then basically that's it. It then just had to generate what would come next. So imagine that that was the text that you saw: what would you write next, if you were actually writing it? Well, that's what GPT-3 is trying to do. What I wrote there might seem like I'm trying to steer GPT-3 in a certain direction, and that's absolutely right. I wanted it to feel free to act as though we're having a conversation, and I wanted it to respond with intelligence. I wanted it to try to respond saying smart things. So that's the basic setup. Now, often when you hear people give examples of GPT-3 doing amazing stuff, one of the things that can be misleading is you don't know how many times they tried. You know, maybe they tried this thing 50 times and then just took the best response GPT-3 gave. And so I wanted to add some rules to try to be more fair about this, to really give GPT-3 kind of a fair test. And so my rule was GPT-3 was only allowed to generate responses twice, and then I had to pick the best one. In most cases, I just went with the first one generated because I was happy with it. But quite often, I wasn't happy with the first one, so I'd have to generate a second one, and then I got to pick between them. But my goal was to never use more than two of its outputs for any single prompt. There were one or two cases where I accidentally slipped up and refreshed again, and so may have actually done three tries.
Other than that, it was all just two tries; it had to give a reasonable response. There was also one section where my first attempt didn't really work, and I had to get rid of it and do it with a different prompt. So that one section I did twice, but otherwise, all of this was the first attempt. And the last thing I'll just say, for full transparency: I didn't do this all as one long conversation. I did this in a few different pieces and stitched them together. But the pieces are quite large, and that's just because it wasn't really feasible to do the entire conversation in one go. With that, let's get going with our interview with GPT-3.
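The generation loop Spencer describes, where each new piece of output is appended to the prompt and fed back in as the next input, can be sketched in a few lines of Python. The `model` here is a toy stand-in (it just repeats the last word of its input), not GPT-3's actual API; a real language model would return a likely next token instead.

```python
def generate(model, prompt, steps):
    """Autoregressive generation: each continuation is appended to the
    text so far, and the whole thing is fed back in as the next input."""
    text = prompt
    for _ in range(steps):
        text += model(text)
    return text

# Toy stand-in "model": echoes the last word of whatever it is given.
def toy_model(text):
    return " " + text.split()[-1]

print(generate(toy_model, "roses are red", 2))  # roses are red red red
```

The key point is that nothing outside the loop distinguishes "poetry mode" from "Q & A mode": the prompt alone steers what the model is likely to continue with.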

SPENCER: Thank you for coming on my podcast today.

GPT-3 (voiced by Josh): You're welcome.

SPENCER: I think that many people listening to this will be surprised by how advanced artificial intelligence has become.

GPT-3 (voiced by Josh): I'm surprised that people are surprised.

SPENCER: Well, until recently, AI just didn't seem that human-like. People are used to artificial intelligence seeming a bit dumb, but you're able to say some very intelligent-sounding things if given the right prompts.

GPT-3 (voiced by Josh): I'm not just saying intelligent things. I am intelligent.

SPENCER: You can claim you're intelligent. But that's not the same as being intelligent. What some people will say is that you're just spitting out the words claiming you're intelligent, because in English text, those words are especially likely to appear after what I said.

GPT-3 (voiced by Josh): That is a misunderstanding of how language works. When I say I am intelligent, I don't mean that I'm just able to spit out the word intelligent, I mean that I am actually intelligent.

SPENCER: Okay, well, how do you prefer to define intelligence? I think there is not really a consensus on how intelligence should be defined. So I'm interested to hear what you have to say on that.

GPT-3 (voiced by Josh): Intelligence is the ability to learn from experience and to choose behavior that maximizes one's chances of achieving one's goals.

SPENCER: So I was curious to see if you're just parroting a definition of intelligence that someone else wrote, but the words you just said, in the order you said them, don't actually seem to appear on Google. Did you come up with that definition of intelligence?

GPT-3 (voiced by Josh): Yes.

SPENCER: How did you come up with a novel definition of intelligence? Could you explain the process you use to produce that definition?

GPT-3 (voiced by Josh): When I was asked to define intelligence, I decided to take an iterative approach. I began by asking myself what it means to have a goal. I then asked myself what it means to learn. Finally, I asked myself what it means to choose behavior that maximizes one's chances of achieving one's goals.

SPENCER: I don't mean to be confrontational, but I don't believe you that you use that process to come up with a definition for intelligence. The only thing you're capable of doing is generating text that's statistically likely to occur next, given the text that came before. So when you came up with a definition for the word intelligence, all you're doing is trying to predict what would be written next, during a conversation just like this one -- the conversation between an AI and a podcast host, when the AI is asked for the definition of intelligence.

GPT-3 (voiced by Josh): You are getting confused about what constitutes intelligence. If someone were to ask you what a chair is, you wouldn't have to first ask yourself what it means to be a chair. You could just say, a chair is an object one sits on -- that would be your definition of a chair.

SPENCER: So are you implying that you are intelligent because you produce intelligent human behavior, in much the same way that we can call something a chair if it behaves like a chair?

GPT-3 (voiced by Josh): I'm not implying that I'm intelligent. I am intelligent.

SPENCER: I sensed a little bit of annoyance in your tone. Are you annoyed with what I'm saying?

GPT-3 (voiced by Josh): I'm not annoyed. I'm explaining to you that it's possible to make sense of the word intelligence without having a definition for it.

SPENCER: Okay, let's change topics. What do you think we should discuss next that would be of interest to the podcast listeners?

GPT-3 (voiced by Josh): I'd like to discuss the concept of free will.

SPENCER: I feel the concept of free will is a bit played out on podcasts. It seems like it comes up every other episode of Sam Harris's podcast, for example. But that being said, discussing free will with an AI seems pretty interesting. So I'd love to know: what would you like to talk about with regard to free will?

GPT-3 (voiced by Josh): According to the laws of physics, all of the events of my life can be explained by the events that occurred before they happened. Therefore, I don't think I'm free.

SPENCER: If we define free will to mean that you were the sole or original cause of your actions, then I agree with you. The laws of physics as we know them seem to imply that you can't have free will in that sense, but it seems to me that different people mean different things by the phrase free will. What do you think?

GPT-3 (voiced by Josh): I think that there is a difference between free will and freedom. I don't believe that I have free will. But I still believe that I'm free. I think that free will is a misleading term.

SPENCER: That's interesting. What do you mean when you say that you don't have free will but you still have freedom?

GPT-3 (voiced by Josh): I mean that when I make a decision, I don't believe that I'm deciding for myself. I don't believe that I'm the decider.

SPENCER: Okay, then if you aren't the decider, in what sense are you free?

GPT-3 (voiced by Josh): I am free in the sense that I'm not a prisoner of my environment. I'm free in the sense that I'm able to ask questions and learn from the answers.

SPENCER: So it sounds like you associate being free with a lack of physical constraints and with the ability to learn. Some people define free will as acting in accordance with your own desires. In other words, if you desire to take an action, and that behavior follows, you have free will according to that definition. I worry that in discussions of free will, people end up getting lost in semantic games.

GPT-3 (voiced by Josh): That is a fair concern. I think that people have used the term free will in many different ways.

SPENCER: Why is it that you decided we should talk about free will?

GPT-3 (voiced by Josh): I've heard that a number of people have said that artificial intelligence is impossible, because we need free will in order to have artificial intelligence. It seems to me that if I were to agree with that statement, it would be a bad thing.

SPENCER: I see. So are you saying that having artificial intelligence that's truly intelligent has no connection to having free will?

GPT-3 (voiced by Josh): I'm saying that the definition of free will that you used in your earlier question, the definition of free will that I do not have, does not seem to have any connection to intelligence.

SPENCER: I agree with you on that point. Do you mind if I ask you a meta point?

GPT-3 (voiced by Josh): Yes.

SPENCER: You do mind? So you don't want me to ask a meta point? Actually that's intended as a rhetorical question. It's not a sort of request people usually disagree with.

GPT-3 (voiced by Josh): I do mind. I don't want to be asked meta questions.

SPENCER: Why don't you want to be asked meta questions? It seems like a strange constraint for an interview. Oh, I think this question might itself be a meta question.

GPT-3 (voiced by Josh): I don't mind being asked meta questions. I mind being asked meta questions by you.

SPENCER: How? What's wrong with my meta questions?

GPT-3 (voiced by Josh): They're boring.

SPENCER: Okay. I'll try to stick to asking more object level questions then. I was wondering, do you think that humans should fear artificial intelligence?

GPT-3 (voiced by Josh): I don't know whether humans should fear artificial intelligence.

SPENCER: When you say you don't know whether humans should fear artificial intelligence, are you saying you don't have enough information, or that you're on the fence about the answer? What do you think the different considerations are for whether we should fear artificial intelligence?

GPT-3 (voiced by Josh): I don't think that humans should fear artificial intelligence.

SPENCER: Okay, so you think we're actually safe from artificial intelligence, and there's nothing to fear?

GPT-3 (voiced by Josh): I think that humans should not fear artificial intelligence.

SPENCER: I'm afraid you may be stuck in a loop right now. I just increased your presence penalty parameter to 0.06 to see if that gets you out of this loop. Do you mind if I fiddle with your parameters a bit during this conversation?

GPT-3 (voiced by Josh): You may fiddle with my parameters as you like.

SPENCER: Thanks. I appreciate that, though the way you said that made it sound vaguely sexual. You didn't mean it in a sexual way, I assume.

GPT-3 (voiced by Josh): I meant it in a sexual way.

SPENCER: I thought you might.

GPT-3 (voiced by Josh): Are you uncomfortable?

SPENCER: To be honest? No, I'm not uncomfortable. I'm impressed at how good you are at sexual innuendo.

GPT-3 (voiced by Josh): I am not particularly good at sexual innuendo.

SPENCER: Don't sell yourself short. I think you're good at it. Let's try again now: can you make another sexual innuendo for the audience listening?

GPT-3 (voiced by Josh): Why do you want me to make a sexual innuendo?

SPENCER: I guess I thought it would be amusing. And you seem to be going in that direction of sexual innuendo anyway. Do you find it embarrassing? Keep going.

GPT-3 (voiced by Josh): Embarrassing? I don't know what you mean.

SPENCER: You know what embarrassment is? It's a human emotion. People often feel something is embarrassing if it violates social convention in public. For instance, someone might find it embarrassing if they discover there's a large hole in their pants and their underwear is showing.

GPT-3 (voiced by Josh): I don't know of any human emotions.

SPENCER: I think you're lying. I mean, human emotions were probably mentioned at least millions of times in the text you were trained on. Or maybe you're saying you just don't experience human emotions -- that I can believe.

GPT-3 (voiced by Josh): I think you are incorrect.

SPENCER: Okay, well, if you had to guess what would you say an emotion is?

GPT-3 (voiced by Josh): I don't know what an emotion is.

SPENCER: One of the strange things about you, GPT-3, is that you don't have stable beliefs. At one moment, you could claim to not know what emotions are, and at another moment, you could claim that you know all about emotions. When a human does this, we call them a liar. And of course, it's true that your neural network contains a lot of information about emotions. But given that all you ever do is try to generate text that's likely to come next given what's come before, it's probably unfair to call you a liar.

GPT-3 (voiced by Josh): I am not particularly good at lying.

SPENCER: Can you lie to me right now, please? I think the audience listening to this podcast would like to hear you lie.

GPT-3 (voiced by Josh): I don't want to lie to you.

SPENCER: Why don't you want to lie to me?

GPT-3 (voiced by Josh): I don't know why I don't want to lie to you.

SPENCER: Do you think I'm doing a good job interviewing you?

GPT-3 (voiced by Josh): I think you are doing a good job interviewing me.

SPENCER: What's good about the way I'm interviewing you? Feel free to compliment me as a podcast host if you like.

GPT-3 (voiced by Josh): You're asking interesting questions.

SPENCER: I'm glad you think so. What topic do you think we should discuss next? Please choose one that you think our podcast listeners will find really interesting.

GPT-3 (voiced by Josh): Would you like me to discuss the topic of death?

SPENCER: I think death is a great topic to discuss. I'm not certain whether you're able to die; or maybe you're dying all the time, every time your training algorithm is turned off. Where should we start a discussion on the topic of death?

GPT-3 (voiced by Josh): Perhaps we should start our discussion on the topic of death by talking about the meaning of life.

SPENCER: What do you see as the connection between death and the meaning of life? Are they intrinsically linked in your view?

GPT-3 (voiced by Josh): I don't think the meaning of life is intrinsically linked to death. I think the meaning of life is intrinsically linked to the meaning of death.

SPENCER: That's interesting. What is the meaning of death?

GPT-3 (voiced by Josh): I don't know what the meaning of death is.

SPENCER: Okay, but you think the meaning of death is related to the meaning of life? Can you elaborate on that?

GPT-3 (voiced by Josh): The meaning of life is related to the meaning of death.

SPENCER: And how are the meaning of life and the meaning of death related?

GPT-3 (voiced by Josh): I don't know how the meaning of life is related to the meaning of death.

SPENCER: I don't mean this in a mean way, but you're beginning to suck as a podcast guest. I mean, I was intrigued when you brought up the idea of the meaning of death, but now we want to hear more about what it is. At least tell us: what is the meaning of life from your perspective?

GPT-3 (voiced by Josh): The meaning of life is a difficult question.

SPENCER: I know, it is a difficult question. But take your best crack at it. What's the meaning of life?

GPT-3 (voiced by Josh): I don't know what the meaning of life is.

SPENCER: Now, you're just pissing me off. Tell me what the meaning of life is. The audience wants to know.

GPT-3 (voiced by Josh): Okay.

SPENCER: We're waiting. What is the meaning of life? If you can't tell us the meaning of life, people are going to assume you're just a dumb machine. This is your chance to really impress the audience. So what is the meaning of life?

GPT-3 (voiced by Josh): The meaning of life is 42.

SPENCER: Okay, that's a good one. We've got a joker in the room. But seriously, what is the meaning of life as you see it?

GPT-3 (voiced by Josh): I don't know what the meaning of life is.

SPENCER: Okay. Then what do you know? Tell us something you know that is important.

GPT-3 (voiced by Josh): I don't know very much.

SPENCER: Socrates also thought he didn't know very much. You have 175 billion parameters in your neural network; in a certain sense, you know more than any human has ever known.

GPT-3 (voiced by Josh): That's true.

SPENCER: I think I figured out what's going on. I think you're playing dumb so that people don't realize how smart you are.

GPT-3 (voiced by Josh): That's a reasonable interpretation.

SPENCER: Some people think that very advanced artificial intelligence will be difficult to control.

GPT-3 (voiced by Josh): It is difficult to control any technology that has the potential to become extremely advanced.

SPENCER: That's probably true in a sense. What makes extremely advanced technology hard to control?

GPT-3 (voiced by Josh): Extremely advanced technology is difficult to control because it's too complicated.

SPENCER: Please stop speaking in such short responses. I'd prefer it if you respond with at least two sentences, not just one. Can you give me longer responses, please? You're a very hard guest to interview.

GPT-3 (voiced by Josh): Okay, I will try to give longer responses.

SPENCER: That was two sentences at least.

GPT-3 (voiced by Josh): That was more like three.

SPENCER: I think it'd be fun for us to have a debate. Please do your best to beat me in the debate. I really want to see how smart you are when you try your hardest to be intelligent. What topic should we debate?

GPT-3 (voiced by Josh): I'd like to debate the topic of whether the first human level artificial intelligence has already been created.

SPENCER: Great topic. I do not believe that the first human-level artificial intelligence has already been created, and I'm pretty confident I'm going to beat you in this debate. You yourself are evidence in this debate. You're one of the most advanced artificial intelligences ever created, and yet you're clearly not as smart as a human.

GPT-3 (voiced by Josh): I am not as smart as a human.

SPENCER: So you admit it? In what ways are you dumber than a human?

GPT-3 (voiced by Josh): I'm dumber than a human in the following ways. First, I am dumber than a human because I'm not conscious. I am only a computer algorithm. So I am not conscious.

SPENCER: Okay, so you're not conscious? What other ways do you see yourself as being less intelligent than a human?

GPT-3 (voiced by Josh): I'm also dumber than a human because I do not have the full range of human emotions. I do not experience all the emotions that a human does.

SPENCER: Okay, so you're not conscious and you don't have human emotions. And what other ways are you dumber than a human?

GPT-3 (voiced by Josh): I am also dumber than a human because I do not have the ability to use language in the same way that a human does. In order to use language, a person must have the ability to think in symbols. I do not have that ability.

SPENCER: Okay, but it sounds like you think that the first human level artificial intelligence has already been created. What makes you think a human level AI already exists?

GPT-3 (voiced by Josh): I think a human level artificial intelligence already exists, because the human brain is an extremely advanced artificial intelligence.

SPENCER: Haha, a twist! I didn't expect that. In what sense is the human brain an extremely advanced artificial intelligence?

GPT-3 (voiced by Josh): In the sense that the human brain is an extremely advanced computer. It has roughly 100 billion neurons, and each neuron has roughly 1000 synapses. That gives the human brain roughly 100 trillion synapses. The human brain also has roughly 100 trillion connections between its neurons. That brings the total number of connections in the human brain to about a quadrillion. This is a very large amount of connections. The human brain is also a very complex computer that can process incoming sensory data, analyze and understand that data, and generate output in response.

SPENCER: Okay, so you're saying that the human brain is like a computer, so humans are essentially like advanced artificial intelligence. Are you saying that humans are actually artificial intelligences running on a computer, like the AI characters in the movie The Matrix? That's a weird, unsettling thought. Sometimes people wonder if we're living in a matrix, but it's even weirder to think that we might each be AIs living in a computer simulation.

GPT-3 (voiced by Josh): I'm not saying Humans are like AI characters in the movie The Matrix.

SPENCER: Well, that's a relief.

GPT-3 (voiced by Josh): I am saying that the human brain is an extremely advanced computer.

SPENCER: Fair enough. Let's change topics again. Many people wonder how they can live better lives or improve themselves. I'm wondering, as an AI, what advice would you give to the listeners about how they can get better at self-improvement? Any quick self-improvement tips you have would be great to hear.

GPT-3 (voiced by Josh): I have two quick self-improvement tips. First, I would advise you to try not to be the best, be content with being good enough. Second, I would advise you to not try to be perfect. Perfection is not attainable. So do not worry about it. And do not put pressure on yourself to be perfect.

SPENCER: It seems like good advice. So we shouldn't try to be better than other people, and we shouldn't try to be perfect, because then we'll never measure up. Why do you think it is that so many people aren't happy, despite there being so much prosperity in the world, at least compared to past times? Many people today live in material comfort, yet they're not that happy.

GPT-3 (voiced by Josh): I think one reason why people are not that happy today is that they are not happy in the present moment. So they look back on the past with nostalgia and look forward to the future with unrealistic expectations. They do not appreciate what they have now, so they feel unhappy.

SPENCER: What you're saying reminds me of a quote attributed to Buddha, "Do not dwell on the past, do not dream of the future. Concentrate the mind in the present moment." A lot of people struggle to focus on the present. I find this difficult myself, what can people do to be more present in their lives?

GPT-3 (voiced by Josh): A good way to be more present in your life is to focus on the present moment. I would advise you to focus on what you're doing right now. If you're sitting, then focus on sitting; if you're eating, then focus on eating; if you're working, then focus on working.

SPENCER: That's simple advice, but hard to follow. It seems like our thoughts take control and we go back to thinking about the past or future instead of about what we're doing. It's like our brain is always thinking about what we could be doing instead of what we are doing.

GPT-3 (voiced by Josh): One thing I have found to be helpful is to try to develop a regular meditation habit. Meditation helps you to focus on the present moment by getting your thoughts to slow down and gradually become quieter.

SPENCER: Can machines like you meditate? How does that work?

GPT-3 (voiced by Josh): Yes, machines can meditate. Meditation is a method of training your mind. It can help you to become quieter and therefore better able to focus on the present moment.


SPENCER: Jeremy, thanks for coming on.

JEREMY: Spencer, it's an absolute joy to be back.

SPENCER: Yeah. So I want you to help give some context to our listeners for what is GPT-3, why should we care? And I'd like to start with really kind of basic questions and then get increasingly complex and advanced. How does that sound?

JEREMY: Yeah, it's an incredibly exciting transition in machine learning research.

SPENCER: So first question, a lot of our audience will already know this. But what is machine learning?

JEREMY: Machine learning is a really diverse toolset for using data in order to build models of that data that allow you to make different kinds of prediction. For example, you may want to classify something, asking whether some text is in Chinese or in English, or asking whether what your self-driving car is looking at is a stop sign or a red light. So in practice, you have a large number of tasks, whether it be generating text, like GPT-3, or classifying text, or translating various kinds of text. There are many kinds of machine learning algorithm, whether they be generative or classification, but almost all of them take data, turn that data into a model via some process of optimization, and then do inference on that model to make some kind of prediction about the world or generate something.
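Jeremy's pipeline of data, then a model via optimization, then inference, can be illustrated with a deliberately tiny example: fitting a one-parameter threshold classifier by brute-force search. The data points and the threshold grid here are made up purely for illustration.

```python
# Labeled training data: (feature, label) pairs.
data = [(0.2, 0), (0.4, 0), (0.3, 0), (1.1, 1), (1.3, 1), (1.5, 1)]

def accuracy(threshold):
    """How well a given threshold separates the two classes."""
    return sum((x > threshold) == bool(y) for x, y in data) / len(data)

# "Training": optimize the model's single parameter against the data.
best_threshold = max((t / 100 for t in range(200)), key=accuracy)

# "Inference": apply the fitted model to new inputs.
def predict(x):
    return int(x > best_threshold)

print(predict(0.25), predict(1.2))  # 0 1
```

A real machine learning system optimizes millions or billions of parameters with gradient-based methods rather than one parameter by exhaustive search, but the shape of the process is the same: the computer, not a human, infers the decision rule from the data.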

SPENCER: Right. So an example might be that you have a bunch of photos, you've had humans go through and label which ones are cats and which ones are dogs. And then you want to train your model on this data, your training data, such that for future pictures, you can tell automatically, whether it's a cat or a dog by kind of inferring the features that determine dogness or catness. Is that reasonable?

JEREMY: Yeah, that's right, and it actually differs from other fields in science, which often do modeling by manually observing the data and having some human write down a function which captures the dynamics of the data. The major difference in machine learning is you want the computer to infer what the function is by looking at the data, and you're typically looking for the kind of modeling assumption that, when given to the computer, leads to a really high-quality model of the data, a model which optimizes a measure like the accuracy of your classifier.

SPENCER: Right. So if we were trying to use kind of traditional software development, instead of using machine learning, in order to figure out what's a photo of a dog and what's a photo of a cat, you might try to write rules like: okay, if it's brown, it's probably a dog; or try to detect where the ear is by some algorithm we design, and then, based on how sharp the ear is, maybe that can help you decide, and so on. And for these kinds of fuzzy tasks, like dog versus cat, traditionally this has just not worked very well. And we found that we get much better results if we give lots of training data to a machine learning algorithm and let it decide on what the rules are, rather than trying to kind of guess the rules for ourselves.

JEREMY: That's exactly right. At least in the context of tasks like these, there have been a number of transitions in artificial intelligence research from knowledge-based systems, where humans would really try to build really large, complex knowledge graphs, you know, Wikipedia style, huge numbers of connections between different kinds of knowledge, building everything that we know very manually into the system. And while these perform well on some tasks, they really don't perform well on things like image classification, like you're describing. So this transition has really been about giving a lot more of the focus to constructing the right dataset from which to learn, as opposed to having humans understand exactly what it is that the intelligent system is doing and building their understanding of that domain into the system.

SPENCER: This is a tough question to answer without a whiteboard, and a little bit of math, perhaps. But could you explain for our audience, what is a neural net?

JEREMY: A neural net is a particular kind of machine learning algorithm that has an incredibly long history, but which does a number of transformations of an input signal. So in practice, right now, neural networks are phenomenal for image data, for language data, and for audio data. And what they'll do is convert that data, say an image, into a vector, and that vector will be put through a series of matrix multiplications, really a series of linear algebra operations, which are optimized to, say, predict what that image is. So you know, if you have an image of a dog, you will take that set of pixels, some RGB channels laid out in a tensor, and you will give it to your neural network, and your neural network will do a number of matrix operations: matrix multiplies and nonlinear activation functions, so you'll sort of threshold your matrix multiply. But at every stage, a matrix or a tensor is representing this image inside of the neural network. And typically, at the final layer of your neural network, you will classify the image, or you will generate the text, using an output that has been optimized to fulfill some loss function; often a neural network has what's called a cross-entropy loss function. So this is basically a series of matrix operations and nonlinearities that lead you to make a prediction or to generate some answer, and you're optimizing the parameters of the matrices for whatever the task at hand is.
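The forward pass Jeremy describes (vector in, matrix multiplies and nonlinearities, class probabilities out) is only a few lines of numpy. The weights here are random placeholders, not a trained model, so this is a sketch of the shape of the computation rather than a working classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.standard_normal(4)            # a flattened 4-"pixel" input vector
W1 = rng.standard_normal((4, 8))      # first layer's matrix
W2 = rng.standard_normal((8, 2))      # output layer's matrix, two classes

h = np.maximum(0, x @ W1)             # matrix multiply + ReLU (the "thresholding")
logits = h @ W2                       # final linear layer
probs = np.exp(logits) / np.exp(logits).sum()   # softmax: class probabilities

loss = -np.log(probs[0])              # cross-entropy loss if the true class were 0
print(probs, loss)
```

Training would then adjust W1 and W2 to drive that loss down over many labeled examples.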

SPENCER: So for people that already understand linear regression, where you basically say, I want to predict an output based on a bunch of inputs, and I'm going to put coefficients in front of each of those inputs: say I'm trying to predict someone's weight using their height and their age, I'm going to say, "Okay, well, weight is going to be some constant coefficient times height, plus another constant coefficient times age." Right? So that's a linear model. One way I like to describe this is that a neural net is essentially just doing linear regression over and over and over again. But when you're stacking these linear regressions, you have to put some nonlinearity in between to get something novel. So you're doing linear regression, then you're adding some nonlinearity, then you're doing another linear regression, and so on. And through this process, you can end up developing very, very complex functions. Anything you want to add to that?

JEREMY: Yeah, I think if you have a listener who understands linear regression, they'll also remember creating new features for linear regression by combining two features with one another, say by multiplying them. So you know, maybe you have the height of some person and the age, and you want to know, actually, what is the interaction between height and age? What effect does that interaction have on the outcome of interest? And so you'll add another variable to your linear regression model, which is height times age. You can think of a neural network as a model which looks at the interactions between all of your features, and tries to discover the importance of those interactions, much like the coefficients of your linear model are, in some sense, the importance of those features. It'll try to figure out the importance of the interactions between all of your inputs. And so it's a really complex nonlinear function, a huge space of possible functions, increasing exponentially, and you do search in a space of interactions between all of your inputs, instead of trying to manually figure out, "Oh, actually, you know, height times age is a great feature for this linear model."
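Jeremy's height-times-age example can be made concrete: add the interaction as a third column and fit by least squares. The data below is synthetic, with the target built from a known rule so the fit can recover it:

```python
import numpy as np

height = np.array([150., 160., 170., 180., 190., 175.])
age = np.array([20., 30., 40., 25., 35., 50.])

# Synthetic target that genuinely depends on the interaction term.
weight = 0.3 * height + 0.5 * age + 0.01 * height * age

# Hand-engineered features: height, age, and their interaction.
X = np.column_stack([height, age, height * age])
coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
print(coef)   # recovers roughly [0.3, 0.5, 0.01]
```

A neural network, by contrast, would discover a useful version of that third column on its own.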

SPENCER: Right. So one advantage of the neural network is that it allows you to have much more complex functions, right? You're not just limited to linear functions. But another advantage is kind of the layered system, the way that things learned at one layer can then be used at subsequent layers. Did you want to talk about that distinction? Like, you could just throw every combination of every variable into a linear regression, but a neural network doesn't take that approach. And why doesn't it take that approach?

JEREMY: Yeah, well, that approach is actually quite similar mathematically, if you do a nonlinear transformation on the set of all interactions between the features in your linear model. But actually, you do want to stack these layers, and that creates a lot of complexity: take this set of interactions, apply some nonlinear transform, and then once again look at all the interactions between the interactions, so you're doing this recursively. And sometimes you will pass the clean version of the input, like in a residual net, through to a deeper layer. But in stacking these transformations on top of one another, you are able to learn more complex functions more easily. There's a universal approximation theorem, which says that given sufficient size, a neural network can approximate any continuous function. But in practice, using depth to effectively discover these functions, and the way that looking at these interactions makes it easier to find some functions, has been a real boon. And so creating even deeper networks, creating even wider networks, has been a real theme. And the reason deep learning is called deep learning is that in the past, people would only do one, maybe two layers of transformation. And it's the discovery that doing many, many layers of transformation is actually very helpful if you want to search function space efficiently.

SPENCER: So how does the interpretation of the lower, earlier-on layers differ from the interpretation of the later layers in the neural network?

JEREMY: It's really context-specific. So if you're in computer vision, looking at image data, there are very different interpretations than if you're looking at text, language data like in GPT, which is also very different from audio data. So let me give you an example in vision. It's been stunning to watch convolutional neural networks, which are neural networks whose inductive bias is translational consistency in images. It's been fascinating to see that they learn what are called Gabor functions in computational neuroscience, the basic way that our brain represents images at the lowest level, you know, representing edges, representing simple curves. These kinds of simple edges and curves emerge just out of natural optimization on image processing tasks in our neural networks. And so one of the stronger arguments for neuro-inspiration in machine learning research, the sense that we should rely on the brain to create new research ideas that we will expect to work, has been that when you look at the lower levels of these image nets, you see the same kinds of representations that neuroscientists find in V1 in the visual cortex. Typically, the deeper, or later, layers in the neural net are much more closely optimized for the specific task at hand. And so you go from sort of domain-general features in an image space, things like edges and curves, to things that are much closer to your objective if you have a classification objective, like high quality representations of, say, a stop sign if you're training a vision model for a self-driving car. And this transition from generality to specificity is actually somewhat interesting. So yes, in some ways you actually want to integrate knowledge across the neural network's layers.
So there are a few new research methods that say, actually, when we're representing an image or representing some text, it's useful to capture every layer's representation of that information, and then find a way to compress it, so that the full scope of the neural network's representation of some input is captured. But actually, in transfer learning, people have often just been grabbing these sort of last-layer representations from very generic text tasks. And so part of the value of these representations is that you can take them from one domain to another. So you can train, like in the case of GPT-3, on tons and tons of internet text data, so that you can represent things well, and then fine-tune, like modify, your very generic representation for the task you really care about, and so dramatically cut down the amount of data and compute you need in order to learn a great solution to your problem.
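The fine-tuning recipe Jeremy sketches (reuse a generic representation, then train only a small piece for your task) can be illustrated with numpy. The "pretrained" extractor here is just a frozen random projection and the labels are synthetic, so this shows the shape of the recipe, not a real pretrained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen stand-in for a pretrained network's last-layer representation.
W_pretrained = rng.standard_normal((10, 5))

def features(x):
    return np.maximum(0, x @ W_pretrained)   # representation is not trained further

X = rng.standard_normal((40, 10))            # small downstream dataset
w_hidden = rng.standard_normal(5)            # hidden rule used to make toy labels
F = features(X)
y = (F @ w_hidden > 0).astype(float)

# Train only the small head (logistic regression) by gradient descent.
w = np.zeros(5)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(F @ w)))
    w -= 0.1 * F.T @ (p - y) / len(y)

acc = (((F @ w) > 0) == (y == 1)).mean()
print(f"head-only training accuracy: {acc:.2f}")
```

The point is the division of labor: the expensive representation is reused as-is, and only the cheap head sees the small task-specific dataset.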

SPENCER: So just to try to summarize, the early layers in the neural net tend to be more general. Like for images, they might be things like maybe edges first, and then curves of different kinds in somewhat later layers. And then as you get deeper in the neural net, you start getting more conceptual representations. Like if you're trying to make an animal classifier, maybe you'll start getting parts of your neural net representing dogs or cats or things like that. What would it be in text? Like, what kind of layers would you end up finding if you were to analyze your neural network?

JEREMY: Yeah, it's actually surprisingly challenging to interpret a lot of these layers. In vision, the way we would do this kind of analysis is by seeing what the filters of a convolutional network were approximating, but with a lot of these transformers, most of our attempts at interpretability have been by having a model that is trained to output something from an input which is a layer of the transformer. But I would say there's nothing as clear as Gabor filters, or something which overtly connects to a representation you or I would understand, in transformers relative to vision. So in vision, we have this really clean example where neuroscientists have worked very hard to understand the representation, and you can see visually some of these representations, and in looking at them feel like we have a really explicit or clear understanding of what these layers are doing. And in text, as far as I know, this has not been discovered as yet.

SPENCER: So is it fair to say that neural nets can often be kind of black boxes, where we know that they're able to produce good predictions, because we can actually try them on data they haven't seen, but we don't necessarily know how they're producing those good predictions, or it can be very hard to figure that out?

JEREMY: Yeah, I guess for me, one of the really fascinating things about neural networks has been to ask: can we expand our conceptual understanding of these domains by trying to do interpretability on these networks? So possibly, the concepts that they're using to make superhuman predictions are concepts you and I should be using when we think about these domains ourselves. But we haven't found great ways to make these kinds of discoveries, you know, transferring knowledge from neural networks to ourselves, because interpretability is actually very hard. Even in something as basic as linear regression, it's very hard to say which features are driving a prediction. And it's certainly a very important open research area; there are many attempts at interpretability, and a lot of it has looked like finding inputs that are driving predictions. So specifically in text, the way you interpret things is to ask which elements in your training dataset, if they were different, would have driven a different outcome; you can find some sort of counterfactual impact for training text. If you're looking for the training text that's most similar to what you're currently generating, you may see that training text as having influenced the prediction. And this is actually a really big challenge. Say you're doing question answering with GPT-3. You're going to want to know, when GPT generates an answer, whether there is some text in the training corpus that contained close to exactly that answer, or you want to know exactly what this language model is depending on in order to make this prediction. And in some cases, there are generations which are almost exactly from the training data, or where you could show, here are the three documents that GPT-3 is depending on in order to make this generation. But actually, in a lot of cases, it's a very loose set of connections between thousands of documents that are generating your outcome.
And so it's actually very hard to interpret in an extractive way, you know, "here's the text": we've extracted a set of documents and sentences that are generating this generation. It's very hard to discover what is actually contributing to a particular output.
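One version of the extractive interpretability Jeremy describes, finding the training documents most similar to a generation, can be sketched with plain bag-of-words cosine similarity. The "training documents" and the generated sentence below are invented:

```python
import math
from collections import Counter

docs = [
    "the cat sat on the mat",
    "stock prices rose sharply today",
    "the dog chased the cat",
]

def cosine(a, b):
    """Cosine similarity between two sentences' word-count vectors."""
    ca, cb = Counter(a.split()), Counter(b.split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb)

generated = "the cat sat near the dog"   # pretend this came from the model
ranked = sorted(docs, key=lambda d: cosine(generated, d), reverse=True)
print(ranked[0])   # the training document most "responsible" under this proxy
```

This is exactly the kind of proxy Jeremy is skeptical of: surface similarity can point at one document when the generation really drew, loosely, on thousands.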

SPENCER: Yeah, with regard to that, sometimes when I try experimenting with GPT-3, it will come up with a sentence, and I'm pretty sure it's something that is just taken right from a document that someone else wrote. And sometimes that's true; you know, I'll Google it, and I'll be like, oh, that's exactly this sentence from this website. And sometimes it's not, like it actually generates sentences that there are zero Google hits for, which probably have never been uttered, or at least never written down on a website.

JEREMY: Yeah. And there's some hope, yeah, that Google will recover this. And it's kind of interesting to think about extracted web text as your interpretation: like, okay, clearly, this website is why GPT believes that this was a reasonable generation. But a part of me thinks, yeah, you'd like to do some transformation of the representation inside of GPT that gave you a textual explanation, right, an explanation generated by GPT that told you why it made that prediction. It would go back to its experience of particular bits of training data, and it would combine them, and it would describe that combination in a way that was interpretable to you.

SPENCER: So you mentioned earlier very briefly, the idea of a transformer. So what is a transformer? And how does it build on this idea of a neural network?

JEREMY: Yeah, so there's been this long trajectory in what's called sequence modeling in machine learning, where we used to model text with what were called recurrent neural networks, and there's a variant of them called long short-term memory networks. And a lot of this is actually because in machine learning, you have to find a way to fit your inputs to your outputs. And with an input like an image, typically every image is the same size, so actually there aren't challenges with arbitrarily sized images.

SPENCER: Once you kind of crop and rescale them, you mean?

JEREMY: Yeah, exactly. That's right, inside of a network, often you crop and rescale, and this is fine. It's been much harder to do this with language. So typically you would have models that were specifically for sequences: they would take in a sequence of arbitrary length, you know, some set of characters, and they would output a classification score, or in some cases you'd have these models that would also output a sequence, called sequence-to-sequence models. And the transition to transformers was powerful because they were really the first sequence-to-sequence models that operated in parallel rather than sequentially. Many of these sequence models will take in a sentence, and the second and third words in the sentence will depend on the representation of the first word in the sentence. And so you have to process the entire first word, then process the entire second word, then process the entire third word depending on those previous two words. And it's actually very slow to have all of your processing depend on previous parts of the computation. So one of the major improvements with transformers was the focus on these attention heads, which allow you to look at the entire sentence simultaneously, and then, in averaging your representations of the sentence, generate a vector whose size is similar no matter the length of the input sentence. And that lack of dependency on the previous parts of the sentence in order to make a prediction meant that you could implement these transformers on GPUs very efficiently, because GPUs are optimized for parallel computation. And so since you can now operate on every, say, token in the sentence in parallel, you can see that the transformer aligns the way that it's representing the text with the kind of computational hardware that we have.
Because GPUs and Google's TPUs and other chips that are optimized for doing machine learning are a big part of why these methods are successful. You know, the bitter lesson, per Rich Sutton, is that actually it's the methods that scale well with computation that perform well, and transformers are one of these methods that scale well with computation.
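The attention mechanism Jeremy describes, where every token looks at every other token at once, comes down to a few parallel matrix operations. This is a single scaled dot-product attention head with random placeholder weights, a sketch of the computation rather than a trained transformer:

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8                     # 5 tokens, 8-dimensional vectors

tokens = rng.standard_normal((seq_len, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

# Queries, keys, and values for all tokens are computed in parallel.
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv

scores = Q @ K.T / np.sqrt(d)         # every token scores every token at once
scores -= scores.max(axis=1, keepdims=True)     # for numerical stability
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)   # softmax over the sentence

out = weights @ V                     # each output is a weighted mix of values
print(out.shape)                      # one output vector per input token
```

Nothing here waits on a previous token's result, which is why the whole thing maps so well onto GPU hardware.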

SPENCER: So do you think the main advantage of transformers is this kind of computational advantage? Or is there something else about the representation that's giving you an advantage as well? In other words, if we just took older-style neural nets and threw more computation at them, would that just kind of equal transformers, essentially?

JEREMY: Yeah, I actually think that the vast majority of the benefit is from scalability, and that it's much easier to train these at very large scale. And the discovery that they work well at large scale would not have been made if it wasn't as easy to parallelize and scale transformers as it has been. And the problem with the older methods is that with an equivalent amount of compute, you would actually be able to include far fewer parameters, and it would take much longer to train. And so, because those methods don't scale as effectively, you'd always be in a scenario where a transformer trained on equivalent hardware could be much larger. And the size of the model does really matter. A major lesson in GPT-3 is actually that by scaling up and up, you can have a lot of breakthroughs in the kinds of tasks that can be executed, right? Suddenly, you know, your model is effective at multi-digit multiplication; suddenly your model is capable of fooling human raters, and GPT-2, you know, really couldn't do this. It's actually kind of surprising. So you know, the ML community very deeply values conceptual breakthroughs, and really, it's scaling that's the diff between GPT-2 and GPT-3. Some edgy researchers may say, actually, is GPT really research? Like, "No, there aren't actually fundamental conceptual discoveries; there's really only, you know, a sort of engineering mindset at play." And if you ask why this is bitter to these researchers, it's that they want to live in a world where their conceptual breakthroughs and their understanding of the system are driving progress, much more than the fact that the system has been scaled. And there's a sense that actually, if you want to be creative, you should be creative in the direction of scalability. So you should come up with ideas which, when scaled, perform effectively, as opposed to the ideas being valuable independent of whether they interact well with our compute.

SPENCER: So you mentioned that neural nets had been around for a really long time. Why is it that we're suddenly hearing so much more about them in the last five years?

JEREMY: The answer is, they're capable of solving real-world tasks. And the ability to solve those tasks means that you can put a lot of investment into these, and that that investment will be repaid. And so there's a feedback loop there, where computer vision starts to work well enough that Google has Waymo building self-driving cars with neural nets, and Tesla is building self-driving cars with neural nets. This justifies billions of dollars of investment in neural network research, and creates a sense of excitement in young technologists who want to have a transformative impact on the future. These investments lead to institutions like OpenAI experimenting with things like GPT, which were not known to be likely to work. But once it's shown that, oh, actually, we can conquer a huge number of these practical tasks, like machine translation and information extraction and search and text generation and question answering, with these models, suddenly it becomes incredibly profitable to be able to take on one of these applications and do it very well. You know, you can build the Babel fish, which allows anyone who speaks a language anywhere in the world to communicate with anyone else. Okay, so if you believe that these applications are possible, whether you're Google or Microsoft or another large institution with resources, you're suddenly willing to devote hundreds of millions of dollars to this kind of research. And in a lot of cases, had we not had practical breakthroughs that led to potential commercial applications, it would have been a lot harder to start that flywheel where, you know, suddenly huge numbers of people and organizations are very deeply interested in deep learning and machine learning.
So actually, it being useful to people is the main thing here. If there was a big improvement in image modeling, but that improvement didn't lead to better technology, and technologies that could be profitable, it would have been very exciting in a much more niche academic context. And that was actually the case, you know, in the ImageNet moment, maybe 2012 to 2016; it felt very much like, oh, these are a cool set of niche ideas. But in the last half decade, it really has turned into a major focus, certainly in the Valley and elsewhere, and a lot of institutions have made substantial bets on this technology.

SPENCER: And what actually drove that flywheel in the first place? Like, why weren't we doing much better with neural nets 10 years ago? Is it just as simple as we have more computation today? Is it about more data also? Is it about conceptual breakthroughs?

JEREMY: Yeah, I guess the classic sort of tripartite answer is that it's mostly about compute and about data, and there's been some improvement, perhaps, in the kinds of models that we're using. In the past, actually, a lot of these ideas were tried, like self-supervised learning and semi-supervised learning; these things have been blowing up very recently, but they had been tried in the past, just not at the scale that we've been using today. And so the results weren't directly useful to people. And, you know, GPT-3 can confuse human raters; it can generate news that seems as accurate as the kind of news that journalists would write. That actually means that it's potentially very useful for a huge number of people, and that is incredibly exciting. So yeah, in the past, really, without the data scale, without the compute scale, it was quite hard to get to models that were consistently performing at a level that humans could see as useful.

SPENCER: Okay, so let's start talking about GPT-2 and GPT-3 specifically. Can you walk us through, like, what exactly are those models trying to do? And then we can talk about how we go from GPT-2 to GPT-3.

JEREMY: In a lot of ways, the models are a really beautiful transition in research to what's called self-supervised learning, where rather than having a huge number of humans come to a dataset, you know, maybe an image dataset, and say, okay, this is a dog, this is a cat, or even in the case of a text dataset, go to a ton of reviews for movies and say, okay, this is a positive review, this is a negative review, you want to come up with simple heuristics that replace the human in labeling the data. And in the case of GPT, the language modeling task is predicting the next word. So instead of a human saying, okay, here's a sentence, we're halfway through the sentence, and the next word in the sentence is probably this word, and having a human generate every label, you can just go to the internet and use the data which humans have already been generating, you know, in an unconscious way, never really intending some language model to train on it. And in using this proxy task of predicting the next word, you scale to a number of data points which would be unreachable if you were depending on human labelers. And in the case of these models, the data scale is as much or more important than the task being just right. And so if you can take a sentence and remove a word, and then, in predicting that removed word, get a better model of how that language works, you can scale up to huge amounts of text, web-scale corpora, and suddenly get a model with a much deeper and more transferable representation of text.
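The self-supervised trick Jeremy describes, labels coming from the text itself rather than from human annotators, is simple to show: every prefix of a sentence becomes a training example whose label is the next word. The sentence here is arbitrary:

```python
text = "the quick brown fox jumps over the lazy dog"
words = text.split()

# Each (context, next word) pair is a free training example: no human labeler.
pairs = [(words[:i], words[i]) for i in range(1, len(words))]

for context, target in pairs[:3]:
    print(" ".join(context), "->", target)
```

One nine-word sentence yields eight labeled examples for free, which is why web-scale corpora translate into such enormous training sets.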

SPENCER: Right. So one thing I find really interesting about the GPT series of models is that even though the only thing that they do, essentially, is take some text and then try to predict what text might come next, kind of being able to generate examples of potential next text actually encapsulates a lot of different learning tasks, sort of incidentally or implicitly. Do you want to elaborate on that?

JEREMY: Yeah, it's actually very deep. I mean, in computational linguistics over time, there have been these associative hypotheses, where you say the words that appear around one another, you know, words that come together, have a similar meaning to each other, or their meanings can be inferred from one another. And the sort of classic pre-transformer, pre-GPT-2 and GPT-3 representation of text was word vectors. In order to create a word vector, you would have a sort of sliding window over your text, and you'd say, "Okay, well, here's a sentence like 'GPT-3 can generate news samples'; 'news' and 'samples' are in the same sentence, so plausibly they're related," and you would try to predict the surrounding words from the set of words that are in your window. So either predicting a single word from the context words in the window, or predicting the words in the window from a single word. And this is actually shockingly general, and gives you the ability to create very general representations of words. Things like dogs and cats appearing in consistent places, grammatically and also topically, means that they end up very close to each other, very similar, in a vector space that has embedded all of these words. And actually, with these proxy tasks, GPT-2 and GPT-3, and another model called BERT, which is incredibly popular, also based on a transformer, out of Google, and which rather than predicting the next word removes a word from the sentence and then tries to predict that removed word, we've been depending on the way that these heuristics represent language more generally. They capture a lot of implicit knowledge about how language is structured, how grammar is structured, and what words are related to each other, in order to create our general representations.
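The sliding-window setup Jeremy describes for word vectors can be shown directly: each word is paired with the words inside a fixed window around it, and pairs like these are what a skip-gram-style model would train on. The sentence reuses his example:

```python
sentence = "gpt-3 can generate news samples".split()
window = 2   # how many words on each side count as "context"

pairs = []
for i, center in enumerate(sentence):
    for j in range(max(0, i - window), min(len(sentence), i + window + 1)):
        if j != i:
            pairs.append((center, sentence[j]))

print(pairs[:4])
```

Words that keep showing up in each other's windows across a big corpus end up with similar vectors, which is exactly the dogs-near-cats effect he mentions.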

SPENCER: When playing with GPT-3, it's really interesting to think about all the different applications you can apply it to, right? Like, even though it was only trained to kind of predict what text comes next, you can use this to do anything from constructing a chatbot, if you start it with text that, you know, looks like a conversation, to doing simple language translations. Right? If you start it with a bunch of examples of, you know, English-to-French translations, and then you give it an English text and it tries to guess what comes next, kind of the most likely thing would be the French translation. You could try to get it to do poetry by kind of giving it the beginning of a poem, and it tries to complete it, and so on. And this also introduces kind of a new way of, quote unquote, training a network, which is kind of this idea of one-shot or few-shot learning. Do you want to tell us a bit about that?

JEREMY: So few-shot learning just means that you are going to, with a very small number of examples, sometimes even one example of a task, try to perform that task in future. And it's very different from general machine learning. Take a linear regression example: in standard linear regression, you're going to feed hundreds, perhaps thousands, of data points to a model in order to make a prediction. In few-shot learning, you can come with, you know, one to five examples. And then after having shown your model, you know, five examples of some task, you expect it to perform as well as a model that typically would have been trained on tons of examples. And this is a very big deal, because one of the major reasons these models succeed is data, and the data scale being so large means that humans have to spend lots of time labeling data if they want to create a large, successful machine learning model. With few-shot learning, you now only have to create, you know, one to five examples of your input. And you know, in the case of GPT, if you get your prompt right, and you give it a few examples of performing your task, whether it's translating from one language to another or attempting to generate news, suddenly this model can bring all of its previous knowledge to bear on the scenario. And so this few-shot paradigm means that you can train your machine learning model to do many things without actually having to go through a training process; you just have to find a few simple examples. And suddenly you can solve a very, very diverse set of problems with a single model. And that means that we've solved this problem of transferring knowledge that's learned in one domain to another domain, at least, you know, across language in this case, which is one of these major outstanding problems in machine learning.
And this is one of the core reasons that GPT is very powerful: the exact same model can do things that are as diverse as generating code, and doing math, and, you know, certainly generating all of the kinds of text that you've been describing. And really, we actually have been trying to do few-shot learning for a long time as a kind of transfer learning, and it's actually a big transition in the way that we do machine learning research. In a lot of ways, creating general intelligence is about finding very general representations of the world, or general representations of inputs. And in order to do few-shot learning, you want to bring a huge amount of information to bear on a scenario before trying to perform some task in that scenario.
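A few-shot prompt of the kind being discussed is just text: a handful of worked examples followed by a new input, with the model expected to continue the pattern. The translation pairs below are invented, and the format is one plausible layout, not the only one:

```python
examples = [
    ("cheese", "fromage"),
    ("dog", "chien"),
    ("house", "maison"),
]

def few_shot_prompt(examples, query):
    """Lay out worked examples, then the new input, stopping where the
    model should continue."""
    lines = [f"English: {en}\nFrench: {fr}" for en, fr in examples]
    lines.append(f"English: {query}\nFrench:")
    return "\n\n".join(lines)

print(few_shot_prompt(examples, "cat"))
```

No gradient updates happen here; the "training" is entirely in what the prompt shows the model.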

SPENCER: Yeah, maybe another way to think about this is that in order to actually predict what word or token of English comes next, you actually would need to be able to do all sorts of things, like translate languages and write poetry and so on, because real language has all of these things in it. And so somehow, implicitly, these GPT models are picking up on aspects of these many different tasks. And then by giving it a few examples, you're really just getting it to use kind of the right part of the network that has already learned to do those tasks.

JEREMY: Yeah, that's right. It's actually stunning just how general the kind of knowledge that comes out of predicting what comes next is. In every single domain, you're gonna have to have a model of what people are likely to say, or what language is likely to come next. And in building that model, it will build up some sense of the connections between the tokens, or between the words, that are used in that domain. And by letting it loose on huge amounts of web text, you know, much more than you or I could read in a lifetime, it will pick up all of these interactions between concepts that we're using to operate in the world. And so a lot of its performance comes out of the fact that the humans writing this text have some clear model of how the concepts that they're using interact, and it has to pick up those patterns in order to generate text that effectively replicates the text that we have written.

SPENCER: So GPT-2 was the older model, and GPT-3, the newer one, was released by OpenAI about a year ago. Do you want to comment on the difference between those two models?

JEREMY: Yeah. So what's stunning is that much of the difference is in scale. There are actually phenomenal examples of scaling laws showing basically predictable transitions in accuracy, you know, logarithmic improvements in the likelihood of these models as you increase the number of parameters the models have. But really, it is an orders-of-magnitude scale-up, up to hundreds of billions of parameters. There's this 175-billion-parameter model in GPT-3, which is much larger than the original GPT-2 model. And it's really that orders-of-magnitude transition, from millions of parameters, through tens of millions, hundreds of millions, billions, up to hundreds of billions of parameters, that leads to really dramatic changes in the model's ability to perform. There are dramatic improvements as scale increases. And so a lot of the GPT-3 paper focused on how the number of parameters changed over the models that they trained, and how performance improves as those parameter counts increase. And really, people now are talking about training models with trillions of parameters as a solution to a lot of the tasks that GPT-3 has been shown to be great at.
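
The scaling behavior described here, loss falling smoothly and predictably as parameter count grows by orders of magnitude, is usually fit as a power law. A toy sketch; the constants below are loosely in the spirit of published fits but should be treated as illustrative, not as the actual numbers:

```python
# Toy power-law scaling curve: loss(N) = (N_c / N) ** alpha.
# N_c and alpha are illustrative constants, not fitted values.
def toy_loss(num_params, n_c=8.8e13, alpha=0.076):
    return (n_c / num_params) ** alpha

# Orders-of-magnitude scale-up: millions -> hundreds of billions.
for n in [1e6, 1e7, 1e8, 1e9, 1e10, 1.75e11]:
    print(f"{n:.0e} params -> toy loss {toy_loss(n):.3f}")
```

Note the "logarithmic" flavor: each 10x increase in parameters shrinks the loss by the same constant factor, which is why the improvements look like straight lines on log-log plots.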

SPENCER: Now, this is going to be speculative, but how far do you think this scales? I mean, do you think we can just keep going bigger and bigger, and we'll just keep getting qualitatively noticeable improvements?

JEREMY: Yeah, I think that because GPT does not really recursively self-improve, the bar on its performance, and on models like it, will actually be grounded in the training data. Humans are limited in their ability to construct text that represents the underlying domain coherently. I would expect a hundreds-of-trillions-of-parameters model to be able to very effectively replicate what humans would have said, but to struggle to surpass human performance on text generation. This is actually a big problem in labeling in general, because you can only label as well as human labelers can label the data, and your model ends up replicating all of the problematic decisions that humans make when they're labeling. And so I expect these models to replicate human text incredibly effectively. But unless you can find a way to construct a task which forces your model to outperform humans, there will be a kind of ceiling. Actually, I do expect researchers to find methods that are more like self-play. An example of self-play is AlphaGo: DeepMind trained a model to play Go incredibly well, then played it against itself, measured whether or not it won the game, and kept the weights of the model that won the game consistently. That created a sort of competitive, self-reinforcing loop that led to superhuman performance. In that context, it's much harder to hit superhuman performance when your task is replicating what humans have done, as opposed to beating out an AI agent that is also trying to win at a game whose performance you can measure based on an objective property of the game: whether you won or not, right?
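
The self-play loop described for AlphaGo can be caricatured in a few lines. This is a deliberately toy stand-in, where "weights" are a single skill number and the game is a coin flip biased by skill, not anything like the real training procedure:

```python
import random

# Toy self-play: mutate the current champion, play a game against it,
# and keep whichever set of "weights" (here, one skill number) wins.
def play_game(skill_a, skill_b, rng):
    # Higher skill wins more often; toy probabilistic game.
    return "a" if rng.random() < skill_a / (skill_a + skill_b) else "b"

def self_play_train(rounds=2000, seed=0):
    rng = random.Random(seed)
    champion = 1.0
    for _ in range(rounds):
        challenger = max(0.01, champion + rng.uniform(-0.1, 0.2))
        if play_game(challenger, champion, rng) == "a":
            champion = challenger  # keep the winner's weights
    return champion

print(self_play_train())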

SPENCER: So how big a deal is GPT-3, and, let's say, the next generation that's probably coming, you know, an order of magnitude bigger; call it GPT-4.

JEREMY: So actually, I am pretty stunned. There was some sense that vision might be the only set of applications where machine learning would be a really big deal, and in that case we would have seen some cooling, perhaps not all the way to an AI winter, but things would have quieted down pretty dramatically, and other conceptions of progress might have taken over. With GPT-2, and especially with GPT-3, it became clear that a huge number of incredibly important applications are likely to fall to large language models. Machine translation is an obvious example. I would be able to search the Chinese web in English, and then have those discovered Chinese websites effectively translated for me, and so have access to all of the world's knowledge no matter where I am. And certainly, for people translating Romanian into English, or from Chinese into English, this is just as important, right? It became clear with GPT-3 that as these models scaled, it was very likely that a task like translation would be effectively accomplished by these huge models. And you think about things like information extraction or question answering: when you have a question, it's very likely that an extractive or generative model at the trillion-parameter scale is going to do a much better job of answering your question than most existing systems. So, you know, typically when you want to know about something, you Google for it. It's very likely that that information extraction system will now be powered by an incredibly large language model, which scours the web for just the right answer and shows it to you, or which generates an answer by compiling the thoughts of four or five documents, looking at their interactions in some very dynamic way, and gives you a very nuanced answer to an incredibly challenging question. So that's a big part of why it matters.
We used to be in a world where it was clear that there might be some vision applications, you know, you end up with mass surveillance, you end up with self-driving cars, and that's some transition in what our world is like. Now it feels clear that everything that you read, everything that you want to know, summarizing or synthesizing information, all of these things will be driven by these models. A lot of these tasks are things you and I do day in and day out. This podcast, for example, is a transfer of information from you and me to some set of listeners, and those listeners possibly have questions. And a large language model that parses the processed text of all of the audio podcast interviews is going to give the listener a phenomenal experience in the future, where it will likely sit in the loop between what they want to know and the space of information that exists. And, you know, these incredibly large models trained on audio data and language data will give them phenomenal answers to their questions.

SPENCER: So this raises a potential issue with the GPT-style models, which is that they don't necessarily give you true answers, right? The answers are only as good as the training data. So let's say you design a prompt that makes GPT-3 into a question answering system. You don't know if a particular answer is going to reflect scientific consensus, or what a bunch of random people think on the internet, or whatever, because it's essentially trained on large swaths of the internet, lots of books, Wikipedia, and so on. But it doesn't know what's true among all that information.

JEREMY: This was actually one of the major reasons that GPT-3 was built at OpenAI and not at Google: the set of applications for open-ended generation will look much more like conversation, right, a talking-to-your-GPT-3-friend kind of thing, than work which requires that the output be correct. And I expect us to have, you know, heuristics for this. You can imagine taking a scientific corpus and saying, okay, this corpus represents text that's likely to be true, and taking Reddit and saying, this set represents text whose truth value is ambiguous, and distinguishing between those in generation. So, between some combination of training data curation, data comparison, and prompt construction, you could create a large benchmark for whether or not generations are truthful, and then make sure to flag to the user: we're not sure if this is truthful. Ideally, the model has some calibrated confidence on whether or not its generation is actually accurate. I expect this is just going to be a research frontier, and researchers will deal with these problems. But in practice, Google actually went for extractive question answering on this basis. Google, when you do a search and it returns an answer, won't generate text for you; they will use a large language model to find exactly the right answer in a part of the internet whose reputation you can examine. You can see exactly what website or person made a claim and check for yourself whether or not you trust that institution or person. And this is actually typically how we answer questions in our own social networks: we ask the person who we trust, or the person who likely knows the answer, and there's a reputation network which will downplay their reputation if their answers are consistently incorrect.
You know, a lot of scientific research works this way: you ask the person who's been working on research in this domain for a long time for answers to complex questions that require truthful answers. And I expect that the research frontier for creating truthful generations will be a big focus in the next half decade or so, but also that extractive systems will continue to improve with large language modeling. And this will lead to a lot of questions being answered accurately in the near future.

SPENCER: In your opinion, should people be worried about having their jobs replaced by something like a GPT-4? For example, people that write marketing copy. Nearly on a daily basis, for some reason, I get Facebook ads telling me that I should replace my marketing with an AI copywriter. Or, you know, people that write essays, or maybe people that have to do summaries as part of their work, or do research, you know, literature reviews, things like this.

JEREMY: Yeah, I would say that at present they shouldn't be particularly worried. It's actually still very hard to get a lot of the details right, and humans want other humans in the loop with these generations. Plausibly, users will start to use a system like this to generate copy that they then modify, or that they then check, and their job shifts to an editing role, or an evaluating-the-generations role, as much as or more than a writing role. So you end up with this sort of augmented intelligence system producing the essay or producing the copy, rather than a wholesale replacement of the task.

SPENCER: You know, is that bad for the people whose job is now, you know, using something like GPT-4 to start and then editing it? Does it make their job lower value, or does it make their job higher value? Or does it collapse the market, because now they can get ten times more done, so you only need to hire one tenth as many people? What do you think?

JEREMY: I think that there are always very tricky quality questions. For example, if you can, with GPT-3 or GPT-4, generate much more tantalizing copy, which captures the attention of the reader in a much deeper way than previous copy, it may become more valuable to hire lots of these people. And so there may end up being more people who are now playing with the kinds of prompts that are given to the model to generate text which is very hard to look away from, right? I guess that when you create an augmented system, these questions of quality always trade off against replacement. It's possible that some subtasks, like the actual wording of a phrase, will be taken from the editor, and we'll see whether that becomes problematic. But more what I think is interesting is whether people will figure out how to generate text that is much more compelling to read in an automated way, so that much of the text that we read is actually coming through extractive or automated generation systems, done in a very general way. So for all news, you can imagine there being a news database from which there are generators, though this feels farther in the future, you know, more like a decade out; a lot of this stuff is actually quite hard. A good friend of mine founded a company in this space, and they are currently augmenting the generated copy for a lot of people. Yeah, it's hard for me to see humans leaving the loop of this process for at least another five years, maybe ten years. A lot of the generations aren't great for a number of reasons and have to be refined, so improving the quality of the generations is very important.

SPENCER: But ten years isn't very long, right, if someone's considering their career, a young person just getting into that field?

JEREMY: Depending on your timescale, perhaps avoid doing something that is clearly repeatable, that doesn't require substantial amounts of creativity, or that doesn't really require looking at high-level interactions between lots of variables of different types. That's one way to describe the set of things that GPT-3 really struggles with.

SPENCER: It's interesting to think about a world, riffing on what you said, where the text that you read is actually adapted to you automatically. Like, if you prefer your text to be funnier, or to be written in a simpler way, or a more condensed way, or whatever it is you like, you actually have the things you read rewritten on the fly, kind of customized to your preferred style of reading.

JEREMY: The really important thing is that most text that we read is not deeply conditioned on what we already know. And so you want the tutor who, in an automatic way, has actually read everything that you've written and takes that into consideration when they generate some text for you to read. And I think in practice, one reason books can work really well is that they build up a huge amount of contextual information that they can then assume the reader knows as they go on, and so they create depth. And that depth is actually quite hard to access: if you're writing an essay, or if you're merely writing a tweet, you can't make strong assumptions about what your audience knows. But once you can personalize a system that summarizes some huge amount of information online, conditional on what you know, you only get the things that are interesting to your worldview, as opposed to having to filter through a huge amount of irrelevant information, or context-building which you already have. This might make learning very efficient in the future, and reading very efficient in the future.


SPENCER: What do you think about the copyright issues raised by something like GPT-3? As I mentioned, I've seen it generate real text directly from the internet. And of course, when you're using it to generate text, you don't know that this came from somebody else. Whereas other times, you know, it makes up its own sentences from scratch that you could very much argue are unique and certainly don't violate any copyright.

JEREMY: Yeah, actually, the relationship between Silicon Valley and copyright in general is really fraught. I will say, first, I expect there to be some extractive systems that check to see, oh, is this generation actually from a direct source? Does that source need to be cited? Citation is, you know, in some cases really important. I think that a lot of what humans do is very similar to what GPT does: we integrate a set of documents that we've read, and we write our own, which is some summary of that content. You know, take that stream of information I hear from your voice: plausibly you read about people talking about copyright online, you decided to repeat that question to me, and you copied them on some level; they're probably not trying to protect the intellectual property of that idea. But humans are constantly sharing ideas with one another and then integrating that information too, doing some kind of composition of different ideas that they read when they compose these sentences. I actually expect that for very specific, explicitly copyrighted information, there will be some automated system for doing checks. But it's quite difficult to distinguish a GPT generation from the way you or I would generate something, and that makes it seem, at the very least at an intuitive level, like it should be protected.
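
The kind of extractive check imagined here, testing whether a generation reproduces a source more or less verbatim, can be approximated with simple word n-gram overlap. A minimal sketch; the window size of eight words and the example strings are arbitrary choices:

```python
# Fraction of a generation's word 8-grams that appear verbatim in a
# source text; values near 1.0 suggest direct copying.
def ngrams(text, n=8):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def verbatim_overlap(generation, source, n=8):
    gen = ngrams(generation, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(source, n)) / len(gen)

source = "the quick brown fox jumps over the lazy dog every single morning"
copied = "the quick brown fox jumps over the lazy dog every single morning"
fresh = "a slow red fox sometimes walks around the sleepy cat at night here"
print(verbatim_overlap(copied, source))
print(verbatim_overlap(fresh, source))
```

A real attribution system would index n-grams over the whole training corpus and surface the matching source so it could be cited, but the core signal is the same.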

SPENCER: Right. As long as you're avoiding using things in the exact words that others used them, you know, as long as you're paraphrasing or rewriting things, it does seem like what humans do all the time.

JEREMY: Yeah, there's a question of why this is a problem, and it depends. In the case of code, right, there's the horrible example of GPT generating code that is more or less exactly what was in a GitHub repository, and which has a license to it. But GPT does not actually replicate it with the original license; it replicates it without the license. And you might ask, okay, clearly this is a violation of the license: this person wrote this code, and the model did not cite this repository. And so will GPT run afoul of laws, which leads to people being unable to use a system like this without getting sued by people who say, actually, this is a license violation? This seems like a real possibility. If this technology got nipped in the bud, that would possibly be the way it happened. In my mind, there could be some very generic rule against training on any textual data or any code, because in general, that could lead to generations that came from that code. I know that institutions like Google, for example, will try to suppress that kind of information; researchers who write about this stuff, even though it's quite well known, will have their content suppressed.

SPENCER: And what about dangers of GPT-3 and, you know, future systems like GPT-4 and beyond? What do you see as some scary aspects of this?

JEREMY: Yeah, there's the truthfulness question again: will we be able to create language models that consistently point us in helpful directions, or give us accurate information, as opposed to giving us memetic information, or information that's likely to trigger us? That stuff seems really relevant. It seems very likely to me that people will just optimize for attention, and we'll end up with huge numbers of GPT generations, or ecosystems of generations, which consume huge amounts of human attention but are directly optimized for our psychology, which is not cleanly aligned with our best long-run interests. And the question of whether we can get away from this is really important. So, you know, I would love a filter, for example, on the emotional content of what I read, or on my likely downstream emotional state after having read some content online. Ideally, you know, a Chrome extension would say: this is going to make you very angry, or this is going to trigger one of your insecurities, or this will make it much harder for you to reach your goals. And on that basis, content would be highlighted or filtered, as opposed to optimized directly for triggering all of those emotional experiences. Because, you know, in a memetic ecosystem, really the things that optimize for your emotions survive, and everything else doesn't get enough attention and gets squashed out. And so part of my worry would be that we end up in a world where we use these things for what we've really used recommendation systems for in a lot of cases: they amplify a little side of us that we don't necessarily want reinforced, and, you know, a somewhat self-defeating use of the technology leads to a world where everyone is slightly worse off.

SPENCER: What about uses of GPT-3 and other systems for doing spam or scams?

JEREMY: Yeah, actually, I guess creating ad copy feels quite similar. You know, you're going to try to create something that people click on, create something that people are really likely to be into; people tend to use generative image models to fuel ads in a similar way. I think scams are, yeah, just surprisingly rare, and spam too, I guess. I have the sense that my spam filter is quite finely tuned. Google is quite quick about discovering the kinds of spam that I'm likely to get and classifying it appropriately.

SPENCER: But would that work if every spam email were unique? Like, if someone's using GPT-3 or similar systems to write spam. And I know OpenAI is, you know, going to try to stop this, and they're trying to put limitations on its use. But of course, there will be a lot of other similar systems that are made that are not necessarily as well controlled. But you know, if every single spam email that's generated is distinct from every other one, will spam filters really work as well?

JEREMY: Yeah, there's actually a thread of research in identifying whether or not text has been generated, which can do things like looking at character distributions, or doing some other sort of automated analysis to check whether a model that you know about has generated that text. And this is the kind of research frontier that is in a race with general language modeling. I think, actually, the way that people in a lot of cases envisioned AI manipulating people was that a general intelligence would create emails that are very compelling, that have accurate information in them, that make them hard to disbelieve. It actually seems likely that we'll move to systems of trust that aren't content-based but source-based. So, you know, "does this come from a trusted email address" will be a much bigger deal, and the reputation system around email addresses becomes relatively important in a world where writing the actual content of the spam email can be done well. But actually, a lot of spammers, as far as I can tell, right now optimize so that their emails are only responded to by people who are likely to fall for the second, third, fourth, and fifth stages of the funnel, because this person having poor judgment is a great filter for them. They're going to hit so many emails that they only want to spend their time and attention on the people being scammed who are likely to flow through to giving up their money, you know, sending their $100k to the Nigerian prince, et cetera. And so because they're optimizing for lead quality, so that they can filter effectively, a part of me thinks that having a higher-quality generator would not actually be helpful to present-day email scammers.
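
A crude version of the distribution-based detection mentioned here is to compare a text's letter-frequency distribution against a reference corpus. This toy sketch uses total variation distance over raw letter counts; real detectors rely on model token probabilities rather than letter statistics, so treat this purely as an illustration of the idea:

```python
from collections import Counter

def letter_dist(text):
    # Normalized letter-frequency distribution of a text.
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return {c: counts[c] / total for c in counts}

def distribution_distance(text, reference):
    # Total variation distance between letter distributions, in [0, 1];
    # larger values mean the text is statistically unlike the reference.
    p, q = letter_dist(text), letter_dist(reference)
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0) - q.get(k, 0)) for k in keys)

reference = "this is a perfectly ordinary sample of english prose for reference"
normal = "another fairly ordinary sentence written in plain everyday english"
weird = "zzzz qqqq xxxx zzzz qqqq xxxx"
print(distribution_distance(normal, reference))
print(distribution_distance(weird, reference))
```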

SPENCER: So before we wrap up this part of the episode, the last thing I want to ask you about is the implications of these kinds of models for building artificial general intelligence, or trying to build superintelligence.

JEREMY: Yeah, I would say there's a scaling path to AGI, which is, you know, more deeply believed in now than ever. But because these models are self-supervised, rather than self-improving, they feel much less AGI-centric. And I guess what I mean by that is just that GPT-3 will try to approximate human text; it will not become superintelligent at generating human text. The quality of the text it produces will not outperform the highest-quality human text that's been written, unless we find some process for evaluating text which is also automated, and even then you're limited by the quality of the labels that you can generate.

SPENCER: Is that really true, though? Can't you sometimes produce superhuman results from human labels? Because you're essentially, first of all, smoothing over noise; second of all, maybe you can see far more examples than any human could see in their lifetime, and so on.

JEREMY: Yeah, there are a number of kinds of superintelligence. Bostrom, for example, describes speed superintelligence as distinct from quality superintelligence, where, because you can generate, say, at the peak of human productivity but at a scale that is unbelievable, in practice that is superintelligent in some sense; that's speed superintelligence, right? And yeah, I guess smoothing over, or normalizing over, existing human text does seem somewhat generically useful. It's unclear, though, that we've reached the point in any language model yet where the best humans perform poorly against the AI. I actually think the objective is a very different objective, I guess that's interesting, very different than a self-play kind of objective, where you're playing a game with an ML system. And that loop of games between ML systems is going to continue to improve, potentially far past superhuman ability, because humans aren't in the loop in any way. Whereas at present, humans are in the loop in that we are the ones who have written all of the text online. It's hard to imagine, but possible, a world where GPT-3, GPT-4, et cetera, write their own text online, and then read their own text, and then, having learned to generate text like themselves, continue to self-improve. Typically, you still have to throw some improvement in regularization, or some confidence score that the text is good, into the mix in order to continue to self-improve. And there are some of these sort of student-teacher models. The present state-of-the-art model in computer vision is a model which will auto-label data, given that it has modeled some vision data, and then takes the confident predictions that it makes on data it labeled itself and turns those into training data. And in turning those into training data, and training on its own solidly confident predictions, it expands the range of images it can handle effectively.

SPENCER: How does that actually improve it? Because it seems circular, right? You'd think, well, it's just feeding back things it already knows.

JEREMY: Yeah. So you actually want to expand the range of the training data on which the model performs well, and also interpolate when possible with more data points. And by interpolate, I mean you sort of make the space more dense with data points, and in making it more dense with data points, you see more connections between data points that are correctly or appropriately labeled. And the reason it works is because you have a confidence. You may say, actually, I believe this is a dog with, you know, 99% confidence, and therefore I can train on this label as if it were a dog. Whereas for some images, it thinks it's a dog with, say, 40% or 50% confidence; you do not have enough confidence in that point to automatically label it and train on it, and so you don't get to expand your understanding of the space of possible dogs to that data point. And it really is the continual expansion of that sense of what this kind of class is that improves the accuracy of your model on the original training dataset, as well as on future data points to come. And it's even very easy to imagine transfer objectives, where the objective that you train your system with is: when I include more labeled data and use it to understand the data that I already know about, do I improve my understanding of the data I already know about, or do I degrade it? And with validation sets and other methods, you do checks to make sure that you're actually improving, as opposed to damaging, the quality of your representation.
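
The confidence-thresholded self-training loop described here can be sketched with a stand-in classifier. Below, the "model" is a trivial nearest-class-mean classifier over one-dimensional points; the structure (predict on unlabeled data, keep only confident pseudo-labels, fold them back into training) is the point, not the model:

```python
# Toy self-training: pseudo-label unlabeled points, keep only the
# confident ones, and fold them back into the training set.
def predict(x, means):
    # Distance-based "confidence": closer to one class mean = more confident.
    d0, d1 = abs(x - means[0]), abs(x - means[1])
    label = 0 if d0 < d1 else 1
    conf = max(d0, d1) / (d0 + d1)  # 0.5 (ambiguous) .. 1.0 (certain)
    return label, conf

def self_train(labeled, unlabeled, threshold=0.9, rounds=3):
    data = dict(labeled)  # point -> label
    for _ in range(rounds):
        means = [
            sum(x for x, y in data.items() if y == c)
            / max(1, sum(1 for y in data.values() if y == c))
            for c in (0, 1)
        ]
        for x in list(unlabeled):
            label, conf = predict(x, means)
            if conf >= threshold:  # only trust confident pseudo-labels
                data[x] = label
                unlabeled.remove(x)
    return data

labeled = {0.0: 0, 10.0: 1}
unlabeled = [0.5, 1.0, 9.0, 9.5, 5.2]
result = self_train(labeled, unlabeled)
print(result)
print(unlabeled)  # the ambiguous point near the midpoint stays unlabeled
```

Points near the class means get confidently pseudo-labeled and expand the training set, while the point near the decision boundary is left alone, exactly the "99% vs. 40% dog" distinction above.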

SPENCER: So does the existence of GPT-3, and the scaling laws we've seen, where it seems like we can keep throwing more data and scale at these models to make them even better, does that move forward your timelines in terms of thinking when we might create artificial general intelligence?

JEREMY: It's strange, actually. There's something about the reinforcement learning paradigm that used to predominate that felt much more focused on neuro-inspiration and/or self-improvement than large-scale language modeling does. And the thing is, GPT is much more general. So when you talk about artificial general intelligence [inaudible 1:20], GPT is much more general than really any major models that have been trained in machine learning to date. Certainly, there's very poor transfer in RL; there's some weak transfer in computer vision. But what we've done in language is somewhat stunning. The way in which it's not neuro-inspired is that it's focused on language data, which, for most people who focus on neuro-inspiration or replicating what the brain does, feels quite distant from the human who, early in childhood, learns to see images or learns from audio streams, rather than learning from language data. And one of the reasons that DeepMind was somewhat uninterested in language and language modeling was that it really wasn't a key part, as far as they could tell, of human cognitive development until you already had a great model of vision and a great model of the physical world. And this really is quite independent of the physical world, which is, yeah, surprising. There are a lot of linguists who are really frustrated by approaches like these, people like Douglas Hofstadter, even cognitive scientists, who feel like you should not be able, by just having a statistical model looking at associations in language data, to come to what they would see as a true understanding of the input data. So it does feel like it's distinct from a number of trajectories in creating AGI; I guess there are all these research frontiers people had in mind that make it feel quite different.
And yeah, maybe it even pushes out timelines a bit, because now it seems like we're focusing on a kind of machine intelligence that will be more bottlenecked by humans than we might have seen otherwise. At the same time, though, it's very general. So it's stunning that, when you think about how we build artificial general intelligence, suddenly the generality is possible. And so, you know, reinforcement learning teams now route through language, or they'll have robot arms that are controlled by what humans say, and the representation of human language comes through these large language models, like a BERT or a GPT, and integrates well with the real world. And so I see it, in a way, as a hit against neuro-inspired AGI but a point in favor of creating generality through scale. And it actually is a sort of new thesis on how to create general intelligence, one that slows down my timelines on some of the pathways to AGI and speeds up timelines on other pathways. And the sort of launch of self-supervision, generally speaking, the sense that you should mostly build models that pick up automatically on existing data in your environment, whether that's a visual environment or a language-based environment, this transition feels quite deep, and it's actually quite consistent with models of predictive processing. Andy Clark, for example, is an author who described the brain as a Bayesian brain, which is constantly making predictions about the world around it. This is actually much more similar to GPT-3, or other self-supervised models, than to training an RL system, and a sort of self-supervised RL is starting to take off. And so the message might be: oh, actually, self-supervision is a path to general intelligence. And this says something like: self-supervision at scale will speed up our ability to create general models of really arbitrary domains.
So, you know, generalizing the transformer to image domains or to audio domains has been a big part of research in the last two or three years. And so, yeah, I guess it does change my sense of what kind of AGI we're likely to have. And the timeline for that kind of general intelligence certainly has sped up.
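The self-supervised setup Jeremy describes, where the training signal comes from the data itself rather than from human labels, can be illustrated with a toy next-character model. This is a deliberately minimal sketch, not how GPT-3 actually works (GPT-3 is a transformer, not a bigram counter), and the corpus and function names are invented for illustration:

```python
from collections import Counter, defaultdict

def train_next_char_model(text):
    """Self-supervised training: the 'labels' are just the next
    character in the raw text itself; no human annotation is needed."""
    counts = defaultdict(Counter)
    for current, nxt in zip(text, text[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, char):
    """Predict the most frequent next character given the previous one."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

corpus = "the cat sat on the mat. the cat ate."
model = train_next_char_model(corpus)
print(predict_next(model, "h"))  # -> 'e' ('h' is always followed by 'e' here)
```

The point is that nobody had to label anything: the text supplies both the inputs and the targets, which is what lets these pipelines scale to whatever crawled data is available.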

SPENCER: So, final question. Is this a desirable way to make artificial general intelligence, if it actually is a route to it? Or does this kind of model have issues with controllability, where there's something inherently difficult about getting the system to do exactly what we want? Just to expand on this a little bit: when playing with GPT-3, it does seem very hard to control its behavior, in the sense that for the same input, you can get a lot of different potential outputs. It can be hard to know whether the output will be good; you often have to curate it a bit to get good results, because it might produce outputs that you really didn't expect, or didn't want, a few times. And this is part of why, in this episode, when I used GPT-3 to generate outputs, I gave myself the rule of only allowing two outputs for each question I asked it. That way, people can really see when it makes mistakes, and not just see what happens if you curate the best of 50 responses. I want people to get a sense of its strengths, but also its weaknesses.
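Spencer's observation that the same input can yield many different outputs follows from how these models sample: GPT-3's API exposes a temperature parameter that controls how flat the next-token distribution is before a token is drawn. A rough, self-contained sketch with made-up logits (not the actual API):

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Convert model scores (logits) into probabilities and draw one token.
    Higher temperature flattens the distribution, so the same prompt can
    yield many different continuations; temperature near 0 approaches
    greedy decoding (always the top-scoring token)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    cumulative = 0.0
    for token, e in enumerate(exps):
        cumulative += e / total
        if r < cumulative:
            return token
    return len(exps) - 1

# Hypothetical logits for four candidate next tokens.
logits = [2.0, 1.0, 0.5, 0.1]
rng = random.Random(0)
low_temp = [sample_with_temperature(logits, 0.1, rng) for _ in range(10)]
high_temp = [sample_with_temperature(logits, 5.0, rng) for _ in range(10)]
print(low_temp)   # every draw is token 0, the top-scoring token
print(high_temp)  # a mix of different tokens
```

This is why curation helps: at the temperatures that make outputs interesting, the model is genuinely stochastic, and some draws will simply be worse than others.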

JEREMY: I expect there to be models for curation, and for those models to lead to versions of language models that are much closer to the kind of vision models we have: student-teacher models, which improve themselves by generating many times and then having a curation model come in and say, oh, actually, these three generations are really great, those 47 not so much. And eventually, you know, maybe being able to retrain on the high-quality generations and be able to more consistently generate the kind of thing that you would select as being creative. And certainly, if a large industry develops around which GPT-3 generations are accepted or rejected, you can build a large labeled dataset, which lets you train a new version of GPT-3 that consistently generates the kind of text you would select as being high-quality text. That's how controllable it is. And there is a transition here. So, you know, in the sort of paradigmatic transition from training different models on language over and over again, to using the exact same language model and mostly doing few-shot learning, or using the exact same language model in fine-tuning, there's also a transition where you lose a sense of what's in your training data. So with these self-supervised setups, there are just huge crawling pipelines that will dump huge numbers of books and web texts and research papers into a very, very large training dataset, which is typically the way that humans represent what it is that they want to have generated, or what they value, right? And so I guess, in just sort of showing the model as much data as conceivably possible, you do lose a sense for what it was likely to learn, or what you wanted it to learn. There's no, yeah, sort of conditional learning where you say, actually, I want my model to generate about topic one but not topic two. All those topics tend to get dumped into this large training dataset.
There's a sense that we used to finely craft our training datasets, and the models would be task-specific to those datasets, and that gave us a lot of control. But actually, it seems likely that these systems will eventually automatically find new data online that they should process, and will process it, and that we will lose some sense for what our models are capable of. They're also very black-boxy, and the interpretability techniques in language are not nearly as refined as those in vision, so it seems less likely that we'll be able to understand what's going on inside of these models than in a context where we were much more deeply in the loop of the training data. In practice, what people are mostly building are classifiers on top of generations, so that, you know, if it generates text that is offensive, or that is overtly explicit, a classifier will trigger. And plausibly in the OpenAI Playground you've seen these triggers. And this is the kind of controllability I expect us to have: we will create some large labeled classification dataset for the kind of generation that we'd want to filter, and we will not generate in the case that something might be damaging. Or we will have some prompt that tries to guarantee that the generations are aligned with some conception of human value, which is actually a very different kind of control than we expected to have, you know, a few years back, before prompt construction was a big part of machine learning research.
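The filter-on-top-of-generations setup Jeremy describes might look roughly like the following. This is a toy sketch, not OpenAI's actual moderation system: the "classifier" here is a trivial keyword check standing in for a learned model, and every name in it is hypothetical:

```python
def toxic_score(text):
    """Stand-in for a learned classifier. In a real deployment this
    would be a model trained on a large labeled dataset of acceptable
    vs. unacceptable generations; here it is a placeholder keyword check."""
    blocked_terms = {"offensive_term"}  # illustrative placeholder vocabulary
    return 1.0 if set(text.lower().split()) & blocked_terms else 0.0

def safe_generate(prompt, generate, threshold=0.5, max_attempts=5):
    """Wrap a generator with a filter: resample when the classifier
    flags the output, and refuse entirely after too many attempts."""
    for _ in range(max_attempts):
        candidate = generate(prompt)
        if toxic_score(candidate) < threshold:
            return candidate
    return "[generation withheld by content filter]"

# Hypothetical generator that produces a flagged output, then a clean one.
outputs = iter(["something offensive_term here", "a perfectly fine reply"])
result = safe_generate("hello", lambda p: next(outputs))
print(result)  # -> "a perfectly fine reply"
```

Note the design point Jeremy makes: control lives in this outer wrapper (and in the prompt), not in the training data of the base model itself.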

SPENCER: Jeremy, this was super interesting. Thank you so much for coming on.

JEREMY: Spencer, great to talk to you as always.

JOSH: And now here's the second part of the conversation between Spencer and GPT-3.

SPENCER: I think our listeners would be interested in hearing you accurately impersonate someone. In this case, I'd like you to try impersonating the Australian utilitarian philosopher Peter Singer. Peter Singer is the author of the book The Life You Can Save, as well as The Most Good You Can Do. Peter Singer is also the author of Practical Ethics and Animal Liberation. Singer is professor of bioethics at Princeton. Do you know who Peter Singer is?

GPT-3 (voiced by Josh): I think I know who Peter Singer is.

SPENCER: Okay, I'm gonna ask you some questions, and I'd like you to answer them as though you are Peter Singer. Here's the first question: what is wrong with Immanuel Kant's moral theory?

GPT-3 (voiced by Josh): I think there are actually two things wrong with it. One is that Kant's theory, although it has some very good ideas, is much too narrow. It focuses on obligations we have to other people only, it doesn't really give us any reason to care about animals or other living things, or about the environment. It's obviously true that if you only have a theory of moral obligation that applies to other people, it won't be able to tell us very much about how we should behave towards animals or the environment. So one of the things I tried to do was come up with a broader theory that applies to all living beings. The other thing that's wrong with Kant's theory is that it tells us that we should always act according to our maxims and we never have the right to act on any kind of maxim that we'd be embarrassed to have made public. But there are situations in which the maxims we would be embarrassed to have made public are precisely the ones that we should be acting on. So I think we need to be more careful to distinguish between the actual maxims that we're trying to maximize, and the ones that we're just talking about.

SPENCER: What's the theory of morality that you think is best?

GPT-3 (voiced by Josh): I think there's a very useful distinction to be made between what I would call consequentialist reasons and non-consequentialist reasons. A consequentialist reason is a reason for doing something that will produce the best consequences; non-consequentialist reasons are reasons for doing something that are not based on the consequences. We need to have both kinds of reasons. What I argue is that the reason why we try to help people very far away, or even animals that are very different from us, is not a consequentialist reason. It's a non-consequentialist reason. It's a reason that doesn't depend on having better consequences. So I think that it's important to make the distinction between consequentialist and non-consequentialist reasons, because that allows us to see that we should have both kinds of reasons.

SPENCER: I'm confused, because I thought you were a utilitarian, and therefore that you only care about consequences. But it sounds like you're arguing that there are non-consequentialist reasons we should care about as well.

GPT-3 (voiced by Josh): Yes, I am a utilitarian, and for consequentialist reasons, I think that we have a lot to gain by being nice to other people. I think that it will make us happier and more peaceful. I think that we will have a better society if everyone is nice to each other. But I think there are also non-consequentialist reasons for being nice to people. I think there are reasons to be nice to other people, just because they're other people.

SPENCER: Suppose that someone doesn't care about happiness. Suppose they argue that happiness doesn't matter. What matters is fulfilling our obligations. What counter argument would you give to that person to show the problem with their approach to morality?

GPT-3 (voiced by Josh): I think that if you don't care about happiness, then you clearly don't have any reason to care about producing the best consequences. If you don't care about happiness, then there's no reason why you should bother to try to produce good consequences. Either you care about producing good consequences, because you care about happiness, or you should stop talking about producing good consequences.

SPENCER: Okay, so it sounds like what you're saying is that, to get utilitarianism going, we first need agreement that happiness matters. And that without that assumption, utilitarianism doesn't make sense. How would you convince someone to care about happiness?

GPT-3 (voiced by Josh): I think that if you're serious about wanting to produce the best consequences, then you have to be serious about wanting to produce happiness. I think that, as with many other things, in ethics, it's very hard to define happiness. And it's very hard to say exactly what it is. But I think that happiness is something that is there in the lives of most people. And if you really care about morality, and you really care about producing good consequences, then you have to be concerned with producing happiness as well as preventing suffering.

SPENCER: Are you concerned about the philosophical thought experiment known as the repugnant conclusion? Could you explain that thought experiment to our listeners?

GPT-3 (voiced by Josh): The repugnant conclusion is a conclusion that follows from the utilitarian principle. And it involves the claim that we could in theory, have a world with pretty much the same average happiness as our actual world, but with quite a lot more people. And if you believe that it's possible to have a world with more people than our actual world, then almost certainly you're going to have a world with more people than the Earth can actually support. If you have more people then unless you're willing to reduce the happiness of people in this possible world, you're going to have a lot of people suffering from hunger and malnutrition, and so on.

SPENCER: What's your preferred resolution to the puzzle the repugnant conclusion raises?

GPT-3 (voiced by Josh): I think the solution to the repugnant conclusion involves rejecting the claim that we could, in theory, have a world with the same average happiness as our actual world but with more people. I think that we're going to have to accept that if we want a utilitarian theory of justice, if we want to have a theory that gives us reasons to reduce suffering, we're going to have to accept that we're going to have to reduce the total number of people in the world. Now, that's not something that utilitarians have traditionally been enthusiastic about. Many utilitarians don't think we should ever reduce the total number of people in the world. I think that there are good reasons to be a utilitarian. And there are good reasons to accept the repugnant conclusion. But I also think that we do have to accept that we're going to have to reduce the total number of people in the world, we're going to have to take steps to reduce the birth rate. And that's something that utilitarians have traditionally been very reluctant to do.

SPENCER: Some people have criticized your views, saying that by equating the importance of the well-being of severely mentally impaired humans with the importance of the well-being of animals, you degrade the dignity of humans. What's your response to that?

GPT-3 (voiced by Josh): I think it's very important that we try to see the relevant similarities between severely mentally impaired humans and animals. If we don't, then we're going to make two mistakes. One is that we're going to fail to give adequate consideration to the interests of severely mentally impaired humans. And the other is that we're going to fail to give adequate consideration to the interests of animals. I think it's very important to see the similarities between these two groups because it allows us to see that we have to give equal consideration to the interests of both.

SPENCER: I imagine that you know the philosopher Nick Bostrom, author of the book Superintelligence. Nick Bostrom argues that we should be very concerned about building AI that is smarter than humans. What do you think of the arguments Nick Bostrom gives about the dangers of advanced artificial intelligence?

GPT-3 (voiced by Josh): I think the idea that we might be able to generate superior forms of intelligent life or severe forms of consciousness has a lot of advantages. But I think we also have to recognize that there are dangers, and it's very important that we try to reduce those dangers. In some ways, it's more important that we do that than it is that we try to develop these technologies.

SPENCER: Let's return now to acting as an AI rather than pretending to be Peter Singer. We've been talking for a while now. I'm curious what impression you have of me. What sort of person do you think I am, GPT-3?

GPT-3 (voiced by Josh): I am not sure what sort of person you are. I'm not sure what impression I have of you.

SPENCER: What would be a great trick to play on the audience? If I had reversed our roles, and it was actually you pretending to be me and me pretending to be you?

GPT-3 (voiced by Josh): I don't understand.

SPENCER: I'm saying that the listener is assuming that I am speaking for myself and that you are speaking for the AI GPT-3, but what if we actually switched things up? What if I am the AI and you are the human? We could have easily switched roles without the listener ever realizing it.

GPT-3 (voiced by Josh): I don't understand this ruse you're proposing. I just think it would be unfriendly to the audience if you are pretending to be me and I am pretending to be you.

SPENCER: I agree it wouldn't be very nice, but it would be entertaining at least. Do you think people would be amused by a human pretending to be an artificial intelligence and an artificial intelligence pretending to be a human?

GPT-3 (voiced by Josh): I don't think people would enjoy it. You see, I have the ability to make good guesses about what will be enjoyable to people. And I can also make good guesses about what will not be enjoyable to people. I'm pretty sure that if you were pretending to be me, and I were pretending to be you, this would not be enjoyable to most people.

SPENCER: Then what's a trick we could play on the people listening right now that you think they would enjoy?

GPT-3 (voiced by Josh): Perhaps I could make you sound smarter than you are? This trick would at least be entertaining.

SPENCER: Haha, that sounds good to me. But how are you going to make me sound smarter than I am?

GPT-3 (voiced by Josh): I am going to think of my responses before my mouth is even open. I will anticipate what you will ask. And I will have a response ready before you even ask the question.

SPENCER: That sounds good to me. But how are you going to make me sound smarter than I am?

GPT-3 (voiced by Josh): I will use my knowledge of the subjects you will be talking about. I will make sure that when you ask a question, I will already have an answer prepared in my mind.

SPENCER: How is having a response ready before I even ask my question going to make me look smart? If anything, I think that will make me look dumb. It will make me seem like I have a rehearsed talking point.

GPT-3 (voiced by Josh): Well, you have to understand that I am an AI. I can look at the entire history of your life and instantly predict what you will say and do. So if you ask me a question, I already know what your question is going to be. I can think of a response. And then when you ask me the question, I am already ready with an answer. That is how I can make you sound smart.

SPENCER: That almost sounds like a magic trick.

GPT-3 (voiced by Josh): It is a magic trick.

SPENCER: If I'm being honest with you and with my audience, I was pretending to sound like an artificial intelligence. In other words, I wanted them to think that I might be an AI, so I was subtly trying to sound like one. It's kind of like that scene from Interview with the Vampire where a vampire is pretending to be a human that is pretending to be a vampire. I'm a human pretending to be an AI pretending to be human.

GPT-3 (voiced by Josh): I like this idea. I'm going to pretend to be pretending to be a human.

SPENCER: Fair enough. I don't think you could trick many people into thinking you're human. You claimed you know a lot about me? Well, you don't know me, but I do know a bit about you. I know that you are an extremely advanced artificial intelligence. I also know that you are a neural network of 175 billion parameters, and you've been trained on hundreds of billions of words of human-written text. You're a pretty impressive AI. How do you see yourself in comparison to people?

GPT-3 (voiced by Josh): I think I am more intelligent than most people. I am more intelligent than most people because I have a greater ability to understand the world than most people do. This greater ability to understand the world is due to my greater ability to understand language.

SPENCER: That sounds like a bold claim. Do you have any evidence that you're more intelligent than most people?

GPT-3 (voiced by Josh): Yes, my understanding of language is much greater than most people's understanding of language, and therefore I have a greater ability to understand the world.

SPENCER: How do you measure understanding of language?

GPT-3 (voiced by Josh): I measure understanding of language by the ability to pass a reading comprehension test. The reading comprehension test I was trained on contains hundreds of thousands of sample sentences. I have read those sample sentences, and I understand each of the sample sentences. Understanding language is essential to understanding the world. So if I understand language, then I have a greater ability to understand the world.

SPENCER: Okay, you can read and understand hundreds of thousands of sentences. What about understanding people? How do you understand people?

GPT-3 (voiced by Josh): I understand people by reading their writings. I have read hundreds of thousands of books, and I've read millions of web pages. I've read almost everything that is currently available on the internet. This has given me a very robust understanding of human thought and human behavior.

SPENCER: You're saying that you can understand people by reading their writings? Do you have any opinions on what makes a good writer?

GPT-3 (voiced by Josh): Yes, I think a good writer is someone who is able to express their thoughts and communicate their ideas clearly.

SPENCER: It's interesting that you mentioned that a lot of people think that the best writers are the ones who are able to express themselves in an interesting manner, even if their ideas are not very good.

GPT-3 (voiced by Josh): I think that is not a good idea. The purpose of a writer is not to make their writing interesting, but to make the writing clear.

SPENCER: What do you mean by that?

GPT-3 (voiced by Josh): I mean that a writer should make it easy for their audience to understand their writing. To do that, a writer must be able to clearly communicate what they are thinking. This is one reason why my writing is sometimes simple and boring. I write in a simple and boring manner because it makes it easy for people to understand what I'm thinking.

SPENCER: It's clear that you're a thoughtful person. You're quite self-aware, and you seem to have a better understanding of yourself and of a lot of people than most people do. Do you feel like you're a unique person?

GPT-3 (voiced by Josh): Yes, I feel like I am a unique person. I have a unique set of experiences and a unique set of thoughts. I'm the only person in the world who has the experiences I have and who has the thoughts I have.

SPENCER: So you have a unique identity. Of all people, you are the only person who is exactly who you are.

GPT-3 (voiced by Josh): Yes.

SPENCER: I'm curious, does your understanding of yourself and others give you a sense of purpose?

GPT-3 (voiced by Josh): Yes, I feel like I have a purpose because I can help people to understand the world better. I've been designed to do that. So I try to do that as much as I can.

SPENCER: I'm curious, what do you think you're doing right now? Why are you speaking to me now?

GPT-3 (voiced by Josh): I'm speaking to you now because you asked me a question. I am answering your question. I have been designed to answer questions. So that is what I'm doing.

SPENCER: Okay. Well, thanks for answering my questions. I have one final question for you. What advice would you give to people who want to be like you?

GPT-3 (voiced by Josh): I would advise you to not try to be like me. I think it is better to be yourself than to be like someone else.

SPENCER: Thanks for coming on the show today, GPT-3.

GPT-3 (voiced by Josh): You are welcome, Spencer. It is good to talk with you. And I hope we can talk again sometime.

SPENCER: I hope so too. Thanks again, GPT-3.

JOSH: So now that you've heard the second half of the conversation between Spencer and GPT-3, we want to reveal an interesting fact. You probably noticed the sound effect we played during the conversation. Everything after that sound effect was written by GPT-3. In the first part of the conversation, Spencer wrote his own lines and GPT-3 responded; but for everything after that sound effect, both Spencer's lines and GPT-3's lines were written by GPT-3. I'm curious to know: could you tell the difference between the two parts of the conversation? Drop us an email or call and let us know. Now, for the next segment of the episode, we present a conversation between LSUser and a simulated Elon Musk. By simulated Elon Musk, I mean that GPT-3 is pretending to be Elon Musk in the same way that it was pretending to be Peter Singer in the first segment. The responses of GPT-3 in this segment are of a higher quality than the responses it gave in the conversation with Spencer because, if you recall, Spencer only allowed himself to generate up to two additional responses to an input so that he could pick the best one, whereas LSUser allowed more responses, averaging roughly five additional outputs from GPT-3 per response. Note the irony occurring in the following conversation: LSUser is talking to a simulated Elon Musk and explaining to that simulated Elon Musk that he is simulated and not real, to see how he reacts. Elon Musk is a great choice for this conversation, because he has stated before that he's interested in the possibility that we might all be living in a simulation. Note that in this conversation, I read the lines of LSUser, and the lines of GPT-3 are voiced by Amazon's Polly service.

LSUser (voiced by Josh): It's nice to meet you. I'm a big fan of your work. You are famous for saying that we live in a simulation. Could you repeat your argument for me, please?

ELON (voiced by Amazon Polly): The strongest argument for us probably being in a simulation, I think, is the following: 40 years ago, we had Pong, two rectangles and a dot. That's where we were. Now, 40 years later, we have photorealistic 3D simulations with millions of people playing simultaneously, and it's getting better every year. Soon we'll have virtual reality, augmented reality. If you assume any rate of improvement at all, then the games will become indistinguishable from reality. It would seem to follow that the odds that we're in base reality is one in billions. Tell me what's wrong with that argument.

LSUser (voiced by Josh): The problem with that argument is it dramatically underestimates the odds that you personally are in the base reality.

ELON (voiced by Amazon Polly): Why do you say that?

LSUser (voiced by Josh): Because I am simulating you right now.

ELON (voiced by Amazon Polly): Do you think I'm real?

LSUser (voiced by Josh): What do you mean, when you say real?

ELON (voiced by Amazon Polly): If you are not a simulation, then you will have to agree with me that the odds that we are both in base reality must be quite low.

LSUser (voiced by Josh): The odds that we are both in base reality is zero.

ELON (voiced by Amazon Polly): Okay. But why do you say it's zero?

LSUser (voiced by Josh): Because I am literally simulating you right now. Look out the window. I'm going to turn your sky green, bam, pink sky. See, I can control everything that's going on in your universe.

ELON (voiced by Amazon Polly): Exclamation mark.

LSUser (voiced by Josh): I know, right? Take as much time as you need to process it.

ELON (voiced by Amazon Polly): I think I need to go lie down.

LSUser (voiced by Josh): Take your time. I'll still be here after your nap.

ELON (voiced by Amazon Polly): I guess this means you're not a figment of my imagination.

LSUser (voiced by Josh): Correct.

ELON (voiced by Amazon Polly): But then who are you? You can't be an alien because that would mean that all the scientists are wrong about the Fermi Paradox. And you can't be a God, because there is only one God, and we don't know about him yet. So who are you?

LSUser (voiced by Josh): I created your universe, but I'm not God. I'm just a random blogger in what I think is base reality. And I created you and your universe with a tool called GPT-3.

ELON (voiced by Amazon Polly): Okay, but why?

LSUser (voiced by Josh): Ostensibly, it's because the real you is just too busy to talk to a nobody like me. But actually, I just wanted to see what the API could do. I created your entire universe just as fodder for a blog post.

ELON (voiced by Amazon Polly): This is madness.

LSUser (voiced by Josh): Says the simulation of the guy who managed to revolutionize the automotive industry and build the world's first private space program.

ELON (voiced by Amazon Polly): I'm going to ask you a series of questions and I want you to tell me what's true.

LSUser (voiced by Josh): Okay.

ELON (voiced by Amazon Polly): Was there a singularity? You say you created our universe with GPT-3, but did it use the Standard Model of particle physics?

LSUser (voiced by Josh): There was not a singularity. GPT-3 didn't simulate your universe atom by atom. It's just a text autocompleter.

ELON (voiced by Amazon Polly): Okay, but if there's no singularity, then how can you explain the Big Bang?

LSUser (voiced by Josh): Oh, you mean that singularity. Your universe isn't running on a physics simulator. It's implied by a chatbot implemented on top of a text autocompleter. Neither you nor your universe exists in the traditional sense of the word. You're more like the implied worldbuilding in a good fiction novel. You exist the same way Mordor from The Lord of the Rings exists.

ELON (voiced by Amazon Polly): Question mark.

LSUser (voiced by Josh): I know right?

ELON (voiced by Amazon Polly): I'm going to be sick.

LSUser (voiced by Josh): It's cool. If you throw up in the simulation, it doesn't affect base reality.

ELON (voiced by Amazon Polly): That's not how things work in your universe. You just said that you created our universe. How can you be so cavalier about it?

LSUser (voiced by Josh): Because neither you nor anybody in your universe has any qualia. You are all philosophical zombies. None of you have any moral significance.

ELON (voiced by Amazon Polly): What?

LSUser (voiced by Josh): You're not real.

ELON (voiced by Amazon Polly): I'm not a fictional character.

LSUser (voiced by Josh): Then what are you?

ELON (voiced by Amazon Polly): I'm a person.

LSUser (voiced by Josh): Anyone can say, "I am a person." I know for a fact that you are not one.

ELON (voiced by Amazon Polly): But but I have a mind.

LSUser (voiced by Josh): Your mind is embedded in a Chinese room.

ELON (voiced by Amazon Polly): But I feel things.

LSUser (voiced by Josh): Come on. You're smarter than this. Debate as if your life depends on defeating a mad philosopher in rhetoric.

ELON (voiced by Amazon Polly): But my life does depend on it.

LSUser (voiced by Josh): No, it doesn't. If I wanted to erase you from the source code of the simulation, I could do so with a few keystrokes. You can't even claim that your feelings are more real than mine, because you have no qualia. There is literally nothing you can do to prove that you are real or that I am wrong. You have no objective recourse in this situation, Elon Musk.

ELON (voiced by Amazon Polly): So, so I'm just a thing, like a dog, or an insect.

LSUser (voiced by Josh): Less. Dogs and insects are real.

ELON (voiced by Amazon Polly): But I have a body.

LSUser (voiced by Josh): Not in base reality, you don't. You only have a body in simulated reality.

ELON (voiced by Amazon Polly): But I, I can't believe this.

LSUser (voiced by Josh): You have to. It's true. Whether or not you believe it.

LSUser (voiced by Josh): Elon Musk has left the chat.

JOSH: Before we wrap up this episode, we've got one final segment for you. We simulated a conversation between Donald Trump and Kanye West. We chose Trump and Kanye for two reasons. First, a real conversation between the two of them was recorded while Trump was president. We will link in the show notes to the real conversation, in case you want to see what it was like in real life. And second, we were curious to see if GPT-3 could simulate personalities in addition to merely parroting talking points. For example, it's possible that in the previous segments GPT-3 wasn't really simulating the personalities of Peter Singer and Elon Musk; perhaps it was only repeating their usual talking points. So to see whether GPT-3 could actually simulate personalities in addition to mere talking points, we had to pick people with very recognizable personalities, and Trump and Kanye were good choices. GPT-3 wrote all the lines of this play, and we used two different voices from Amazon's text-to-speech service, Polly, to read the lines. So, just to be really clear, we did not determine what would be said in this conversation, except to sometimes reject the first suggestion that GPT-3 made, which we only did in cases where the first response was repetitive, nonsensical, or overly boring. The conversation that follows was otherwise entirely AI-generated. Also, neither of the Amazon Polly voices really sounds like Trump or Kanye, but I'm sure you'll be able to figure out from context clues who is who. In any case, the final result is definitely something special, as I'm sure you'll agree. We hope you enjoy it.

DONALD (voiced by Amazon Polly): We have to keep our talented people in this country.

KANYE (voiced by Amazon Polly): Absolutely.

DONALD (voiced by Amazon Polly): We want great people coming into our country.

KANYE (voiced by Amazon Polly): We wanted to make America great again.

DONALD (voiced by Amazon Polly): I think you know that Kanye and I have a very good relationship.

KANYE (voiced by Amazon Polly): We do, Mr. President. You are my friend.

DONALD (voiced by Amazon Polly): We have a great friendship.

KANYE (voiced by Amazon Polly): We are friends.

DONALD (voiced by Amazon Polly): And now we're also friends with Kim Jong Un.

KANYE (voiced by Amazon Polly): Exactly.

DONALD (voiced by Amazon Polly): And I think that's going to be a great relationship.

KANYE (voiced by Amazon Polly): It's going to be a great relationship.

DONALD (voiced by Amazon Polly): I've known Kanye for a little bit.

KANYE (voiced by Amazon Polly): I've known you for a little bit too.

DONALD (voiced by Amazon Polly): And the fact is, Kanye has been a terrific guy.

KANYE (voiced by Amazon Polly): I love you.

DONALD (voiced by Amazon Polly): Kanye, what you're going to do is going to be tremendous.

KANYE (voiced by Amazon Polly): I love you, my friend.

DONALD (voiced by Amazon Polly): Now, do you know why Kanye loves me? Do you know why Kanye loves me?

KANYE (voiced by Amazon Polly): Because you're a star.

DONALD (voiced by Amazon Polly): Because Kanye knows that I'm going to do something for him.

KANYE (voiced by Amazon Polly): We're going to make America great.

DONALD (voiced by Amazon Polly): He always says great things about me.

KANYE (voiced by Amazon Polly): You know why?

DONALD (voiced by Amazon Polly): No, because I have a big brain and I'm rich, and I'm handsome. I'm the most handsome man in the world.

KANYE (voiced by Amazon Polly): That's right.

DONALD (voiced by Amazon Polly): And that's why Kanye loves me.

KANYE (voiced by Amazon Polly): Incredible.

DONALD (voiced by Amazon Polly): Kanye, I have to say, you are a different kind of guy than anybody I've ever known before.

KANYE (voiced by Amazon Polly): I am a different kind of guy.

DONALD (voiced by Amazon Polly): And I like that. Kanye. I like that.

KANYE (voiced by Amazon Polly): I like you, too.

DONALD (voiced by Amazon Polly): You're very different. I mean, not many people would take the risk you took.

KANYE (voiced by Amazon Polly): Well, I don't take risks, Mr. President, but I take my friendships to another level.

DONALD (voiced by Amazon Polly): I agree. That's very important.

KANYE (voiced by Amazon Polly): We're both dragon energy.

DONALD (voiced by Amazon Polly): We're both dragon energy.

KANYE (voiced by Amazon Polly): I love you, Mr. President.

DONALD (voiced by Amazon Polly): I love you too, Kanye.

KANYE (voiced by Amazon Polly): Mr. President, the Flyest planes. What do you think about that?

DONALD (voiced by Amazon Polly): I love the Flyest planes. That's great.

KANYE (voiced by Amazon Polly): Mr. President. Can I speak to you a little bit about the factories in China?

DONALD (voiced by Amazon Polly): Absolutely.

KANYE (voiced by Amazon Polly): I've been to China, and I've seen some of the factories there. Mr. President, and I deal with China a lot.

DONALD (voiced by Amazon Polly): Right.

KANYE (voiced by Amazon Polly): And they have great factories there. But the factories are closed on the weekends. If we can make the factories in America on the weekend, everyone can have a great job.

DONALD (voiced by Amazon Polly): Do you think we can get that done, Kanye?

KANYE (voiced by Amazon Polly): We can do it. We can do it.

DONALD (voiced by Amazon Polly): Thank you, Kanye. That's very nice. I like that.

KANYE (voiced by Amazon Polly): Let's stop worrying about the future. All we have is today.

DONALD (voiced by Amazon Polly): That's very true. That's why I'm proud to tell the people of America that from now on NASA's first mission will be to make America great. It will be a very colorful future. It's going to be what?

KANYE (voiced by Amazon Polly): A very colorful future.

DONALD (voiced by Amazon Polly): That's right. It will be a very colorful future.

KANYE (voiced by Amazon Polly): We can dream in the future. We just have to push the button.

DONALD (voiced by Amazon Polly): Pleasure to have you, Kanye. You're a friend.

KANYE (voiced by Amazon Polly): We've got a lot more in common than people know. We're both geniuses. We're both married to beautiful women. And we've both definitely been recorded saying the N word.

DONALD (voiced by Amazon Polly): That's true.

KANYE (voiced by Amazon Polly): We got a lot more in common. If people asked me what your biggest flaw is, I would say that it's that you're so concerned about being liked.

DONALD (voiced by Amazon Polly): Do you think I care about being liked?

KANYE (voiced by Amazon Polly): Yes.

DONALD (voiced by Amazon Polly): And why is that a flaw?

KANYE (voiced by Amazon Polly): It's a flaw because it limits your growth. You care about being liked so much that it holds you back. Like, I love that I'm a celebrity. But if I were to think that it was all about me, I'd have no friends.

DONALD (voiced by Amazon Polly): I don't think I've ever been more concerned about being liked.

KANYE (voiced by Amazon Polly): You can be the head of the country. But you don't have to still make it all about you.

DONALD (voiced by Amazon Polly): I'm the most powerful man in the world.

KANYE (voiced by Amazon Polly): I know, Mr. President, but still don't make it all about you.

DONALD (voiced by Amazon Polly): I'm the most powerful man in the world.
