June 1, 2023
What are large language models (LLMs) actually doing when they churn out text? Are they sentient? Is scale the only difference among the various GPT models? Google has seemingly been the clear frontrunner in the AI space for many years; so how did they fail to win the race to LLMs? And why are other competing companies having such a hard time catching their LLM tech up to OpenAI's? What are the implications of open-sourcing LLM code, models, and corpora? How concerned should we be about bad actors using open source LLM tools? What are some possible strategies for combating the coming onslaught of AI-generated spam and misinformation? What are the main categories of risks associated with AIs? What is "deep" peace? What is "the meaning crisis"?
Jim Rutt is the host of the Jim Rutt Show podcast, past president and co-founder of the MIT Free Speech Alliance, executive producer of the film "An Initiation to Game B", and the creator of Network Wars, the popular mobile game. Previously he has been chairman of the Santa Fe Institute, CEO of Network Solutions, CTO of Thomson Reuters, and chairman of the computer chip design software company Analog Design Automation, among various business and not-for-profit roles. He is working on a book about Game B and having a great time exploring the promises and perils of large language models.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Jim Rutt about the power, progress and applications of large language models.
SPENCER: Jim, welcome.
JIM: Hey, Spencer, great to be here.
SPENCER: I think, like me, you're a real lover of ideas, and I think this is gonna be a wide-ranging conversation; we're gonna touch on a lot of different topics. But let's jump right into one that's really on people's minds these days, which is advancements in AI. And let's start with large language models. So do you want to set us up with a little bit of history of large language models? And then we'll talk about the significance of these kinds of technologies, like ChatGPT, GPT-3, GPT-4, and so on.
JIM: Large language models (LLMs) are a relatively recent evolution. They're one of the branches on the tree of the new deep learning neural net models, which really got rolling around 2016-2017. They use so-called transformer technologies. And essentially, what they are is taking a big input body of text (which is called a corpus), running some software over that looking for essentially short-range and long-range correlations between words, and then compiling that into a very nuanced neural net, so that when you give it a set of words, it predicts what words would reasonably come after. And that's really all an LLM is. And there were some earlier versions in 2019 maybe. They were sort of interesting, but something strange started to happen as the sizes of the corpuses and the sizes of the parameters — that's the number of synapse equivalents in the neural net — started getting bigger. I first started seeing it in GPT-2, where it actually seemed to be more than just parroting text, able to create halfway intelligible sentences. And when we got to GPT-3, it was like, "Holy moly." Something as simplistic as a feed-forward neural net, trained on short- and long-range correlations between tokens, now sort of feels like it can express fairly complicated thoughts; it can summarize chunks of text, etc. But it didn't do it all that well. But you could start seeing that this is kind of surprising. And then they increased the size again by a factor of almost 10 to GPT-3.5 — this is what the public ChatGPT runs on — and people immediately said, "Whoa, this is quite different. This is now able to [quote] 'understand' documents well enough to be able to summarize them, to be able to rewrite things, and to answer questions." Now, one of the things we've of course learned is that, for well-known things, it does a pretty good job. You ask it, "Tell me the facts of the life of George Washington," and it'll be as good as your American history book.
If you ask it, "Give me a biography of Jim Rutt," it'll give you one, but most of it will be just shit it made up: what they call hallucinations. And this comes from the way this thing is built: as a series of correlations between words. If the signals aren't there in a strong way, it'll just pick up something sort of plausible, but wrong. Another one I use as a test case, because I actually find that I'm on the edge of GPT's knowledge: it knows I exist, there's a bit about my podcast, it knows a bit about my business career, but not a lot. And I'll ask it a question like, "Alright, who were the 10 most prominent guests on the Jim Rutt Show?" And with GPT-3.5, about five out of 10 will be hallucinations. There'll be people that were never on my show, but are plausible, people I could have invited as guests. And so, this statistical network of words produces a plausible but false answer, and that's something you really have to watch out for. Then, the next step is GPT-4, which is now out, but only on a limited release. I've had access to it since March 14, and anyone now can get access to it by paying $20 a month for ChatGPT Plus. I've also, for quite a while, had professional-grade API access to the models, where I can write programs that ping them and do cool things of that sort. And GPT-4 is another considerable step up; it hallucinates quite a bit less, though not never. For instance, on my test question of the 10 most prominent guests on the Jim Rutt Show, nine out of 10 are real guests, and sometimes 10 out of 10. I've tried it on other fringe facts, and it's much less hallucinatory, but not unhallucinatory. And it's able to understand, in its basic version, quite a bit longer texts. You can now input texts — through the API at least — up through about 6,000 tokens, which might be on the order of 3,500 words, maybe a little bit more, and it will do a halfway intelligible job.
It also holds its coherence much better when you ask it to write something. It used to be, if asked to write an essay, after about 500 words it would kind of drift off the topic, and there was no telling what it might say. Now, it easily writes an 800- to 1,000-word essay; no problem. And the same is true for its coding capacity. GPT-3.5 is perfectly great for writing a 20- or 50-line Python program, but when trying to write a major application, it goes off into the ionosphere relatively quickly. GPT-4 holds its coherence a lot longer. I haven't really probed it hard enough yet to see how long it could go, but I would guess it could probably write a 300-line program reasonably well. And so, that's sort of where we're at. It's a truly amazing technology, because it breaks one of the huge bottlenecks between humans and technology, which is that we can now interact with technology via plain old language, something all of us are good at. You no longer have to be the kind of person that enjoys or is good at writing computer code. You can actually just talk at it, and let some techies figure out how to get the plumbing to the parts of the system that you want to interact with. But this is really going to change a whole bunch. This, I believe, is going to be one of the most significant innovations in technology in a very long time.
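The next-word mechanics Jim describes (predict what words would reasonably come after, based on correlations in a corpus) can be illustrated with a toy sketch. This is not a transformer, just bigram counts over an invented mini-corpus, so it only captures the shortest-range correlations, but it shows the next-token principle in miniature:

```python
import random
from collections import defaultdict, Counter

# Toy illustration of next-token prediction: count which word follows
# which in a tiny invented corpus, then sample continuations from those
# counts. Real LLMs learn far richer long-range statistics over
# trillions of tokens, but the core move -- predict the next token from
# what came before -- is the same.
corpus = (
    "the model predicts the next word . "
    "the model learns correlations between words . "
    "the next word is sampled from the model ."
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        counts = follows[out[-1]]
        if not counts:
            break  # no observed successor; stop generating
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

Scaling the window from one previous word to thousands of tokens, and replacing the count table with a trained neural net, is (very roughly) the jump from this toy to GPT.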
SPENCER: Thanks for that background. Yeah, it's fascinating to hear some people saying that this feels to them like the moment when the internet started to blossom, that sense of "Wow, there's so much potential here; there's so much that's going to change." And so I'd love to start digging into what's on the horizon with you. But first: I just ran that test through ChatGPT for my podcast, asking who the most prominent guests were. And it's funny, because five of them were actual guests, and five of them were hallucinations (people who've never been on the podcast). But among the hallucinations, a couple of them are people that I've been planning on inviting. So, it's not so far off, right?
JIM: Yeah, exactly! Because this is a very rich statistical engine, it has never thrown up completely nonsensical suggestions, right? The hallucinations are plausible. Let me start with the first part of your comment, that people are saying it's as big as the internet. I'm gonna go further and say it's bigger than the internet, probably. I've been suggesting it's as big as the emergence of the PC in the late 1970s. And I would suggest that some of the other big hallmarks we often talk about — the internet and the smartphone — are actually both fairly obvious downhill applications of the PC; those are children of the PC. And while they added a huge amount of capability, they're kind of the same thing. I would suggest that LLMs are somehow different. They are much wider; they can actually move into the land of thought and creativity. And as for these artificial, highly structured impediments to getting access to the richness of technology, LLMs will eventually be able to make those go away. And so my prediction will be (come back and hold me to it), this will be the biggest one since the invention of the PC in the late 70s. And as to where it's going, the models will keep getting bigger. There's already a GPT-5 underway. I'm not entirely clear whether they're actually doing the model building right now, but they probably are, at least for a GPT-4.5. And then, very interestingly, OpenAI, the guys that did the GPTs, are starting to get some competition. Google has a product called Bard that's out (I think it's just in test mode; I've got access to it), and there are a few others. It's also interesting that, for Google, long thought to be the great AI superpower of the world, Bard is a lot worse than GPT-4, and arguably worse than GPT-3.5. And that's kind of interesting and significant. Will OpenAI be able to use its first mover advantage to essentially beat the superpower, who has long been thought to be Google?
On the other side, there's a bunch of open source projects underway. One of them was announced yesterday, and I think this is going to be one of the more important ones. Stability AI — the folks behind Stable Diffusion, the generative art AI — just announced that they've released two true open source large language models, where the software that creates the models and the corpuses (the text that went into them) are all going to be available on GitHub. Now, both of these are small models meant to be run on your own PC. But they also announced (and I happen to know this from conversations with them) that over the coming weeks, they will be releasing larger and larger models, until the biggest one in their suite will be about as powerful as GPT-3.5 (the current public free model). So you say, "Wait a minute, at one level it's nothing new in capacity." But it's actually hugely significant, in that we'll have access to the source code, we'll be able to extend the models ourselves, do what's called fine tunings, be able to turn off the ridiculous nanny guardrails that the big companies have wrapped their products in, et cetera. So I'm putting my money down — in fact, probably literally some money to help fund some of these projects — that open source large language models will very quickly become a very important part of the ecosystem.
SPENCER: How concerned are you about these models being open source, from the point of view of bad actors using them? Obviously, there are a lot of benefits to having open source models. But on the other hand, with closed source models that are monitored via APIs, at least the companies can detect bad actors doing things and try to shut them down, or at least figure out the ways they're being used badly. Whereas when it's open source, it's kind of anything goes. So I'm wondering, do you worry about that, or not so much?
JIM: Not so much. In fact, you may recall, there was this campaign where people signed a letter to slow large language model development for six months or something. My take on that on Twitter was that it was kind of like a mouse pissing on the blade of a bulldozer. It accomplishes nothing. These things are coming; just deal with it. And we'll probably talk later about risk; there are a number of risks. But we're not going to be able to stop these things. These technologies are not that hard. Despite the fact that they look like magic, the underlying code to create a transformer-based large language model is out there. Anybody can do it. The two limiting factors are the corpus (the body of text) and many millions of dollars for the processing to create the models. But they're not that expensive. They're well within the range where a single crypto bro could fund one of these projects, and perhaps already has done so. So there's probably nothing you can do about it. We're just going to have to deal with it.
SPENCER: And presumably, that price is also falling all the time as they make newer and better GPUs and so on.
JIM: Yeah, it's essentially something faster than Moore's law, because the GPUs and tensor processors are improving faster than Moore's law (they're highly parallel, and there are good technical reasons why that's the case). So yeah, the cost will fall by a factor of two every year, essentially, for the foreseeable future. So we should assume that we'll soon have an open source model as good as GPT-3.5. It's reasonable to assume that within six months, we'll have an open source model as good as GPT-4. And probably a year, or 18 months, from now, one that is as good as GPT-5 that's fully open source — the corpus, the weights, and even the software that creates the weights — with the ability to do very powerful fine tunings on those models.
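Jim's cost curve is just compounded halving. A quick sketch of the arithmetic, using an assumed $10M starting training cost (an illustrative round number, not a figure from the conversation):

```python
# If training cost halves every year, an assumed $10M training run
# gets cheap fast. This only models the halving rate Jim cites; real
# costs depend on model size, hardware generation, and utilization.
def cost_after(initial_cost, years):
    """Cost after `years` of halving once per year."""
    return initial_cost * 0.5 ** years

for years in (1, 2, 3):
    print(f"after {years} year(s): ${cost_after(10_000_000, years):,.0f}")
```

At that rate, a frontier-scale training run today is a mid-size project's budget in two or three years, which is why Jim expects open source to trail the state of the art by only months.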
SPENCER: Yeah, my assumption is that the state of the art is gonna stay something like between six months and two years ahead of the open source, but maybe not more than that. I'm curious, does that sound about right to you?
JIM: Yeah, they'll probably be six months ahead. Now, it is possible that somebody could decide to spend $10 billion to build a gigantic model, and that might put them ahead for two years. But it's also not entirely clear. In fact, Sam Altman has recently said he believes that the return-on-size curve is starting to flatten out. The jump from GPT-3 to GPT-3.5 was on the order of a 7x or 8x increase in the size of the parameter set. To go from 3.5 to 4 was another factor of eight [inaudible], maybe a little bit more. So you're going up by a factor of eight in resources, and let's say you're approximately doubling or two-and-a-half-ing the capability of the model. At some point, a factor of eight is a big-ass multiplier; you can't keep multiplying by eight too many times, right? Until you turn the whole earth into one large computer. That's one side. The other one is that, at some point, there's probably a peak to just how much you can gain from a statistical correlation model of language without adding other aspects that you need to actually have real intelligence. And in fact, the other important thing people are missing is that, because these things interact with language and, so far, humans are the only things that have language, people are overreading into these models and what they are. People say, "Oh, yeah, they're sentient," or "They're conscious." No, they're not. They're simple feed-forward neural nets; they don't have any logic, they don't change themselves in any way at all. They don't even have any internal loops beyond some very, very tiny local ones. But people will now take these tools and include them in bigger systems. This is where it really starts to get interesting: you take the models for what their expertise is, which is handling language (kind of like Broca's region in the human brain, where we create our language, and Wernicke's area, where we understand it), and then build other parts of programming around it.
You've probably seen things like Auto-GPT, which adds in short-term and long-term memory. And there are other projects building other parts of cognition around the LLMs, letting the LLMs do what they do best, which is the actual decoding and manufacture of language artifacts.
SPENCER: Right, so maybe it's worth unpacking that a little bit. We have plugins coming out for ChatGPT soon. And my understanding of plugins is that the AI model itself will be able to recognize that a query is not something it can answer directly, and then it will be able to send an API request to some other system. For example, you can imagine it running a Google search or Bing search; or talking to the Wolfram Alpha system to do calculations for complicated math; or reaching into some database to pull out up-to-date weather information, and then answering your question. So essentially, it's augmenting itself by making these outbound requests for information from other systems.
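The routing pattern Spencer describes can be sketched in a few lines. This is a toy, not the real plugin protocol: the "model's decision" is stubbed as a keyword check, and `calculator_tool` stands in for an outbound request to something like Wolfram Alpha. All function names here are illustrative assumptions:

```python
# Toy sketch of the plugin/tool pattern: decide whether a query needs
# an external tool, call it, then fold the result into the answer.
# In the real plugin system, the model itself makes this routing
# decision; here a keyword check stands in for it.
def needs_calculator(query: str) -> bool:
    return any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/")

def calculator_tool(expression: str) -> str:
    # Stand-in for an outbound API request to a math service.
    allowed = set("0123456789+-*/. ()")
    if not set(expression) <= allowed:
        raise ValueError("unsupported expression")
    return str(eval(expression))  # acceptable for this vetted toy grammar

def answer(query: str) -> str:
    if needs_calculator(query):
        expr = query.split(":", 1)[1].strip()  # e.g. "calc: 12*7"
        return f"The result is {calculator_tool(expr)}."
    return "I'll answer that from my own weights."

print(answer("calc: 12*7"))
print(answer("hello"))
```

The design point is the separation: the language model handles language, and anything requiring exact computation or fresh data gets dispatched outward.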
JIM: Yeah, that's true, too. But frankly, I think even more important and more interesting (and this is where most of the really cool stuff is going on) is to create software that talks to the LLMs and then uses the responses to do its thing. So you're essentially driving the LLM programmatically. And you can actually do more than you can do with the plugin model, because you're in control of how you're driving the LLM. In fact, I'm working on some software right now; it's a helper app for movie screenwriters. And it's amazing how the dance back and forth between external software and the LLMs can programmatically do very interesting things. And I'm designing it with humans in the loop, so they do some curation. That's where the most innovative things are going to come from, at least in the short term.
SPENCER: So the idea there is that you're getting the LLM to do some sub task, but then you're kind of piecing this together with other subtasks, which may be done by other LLMs, or maybe some of them but done by humans, or some of them done by regular software. And then they kind of all come together to make something greater than the pieces. Is that right?
JIM: Yep, exactly. And then the next big step will be where we include other parts of AI. We include symbolic AI, we include evolutionary AI, etc, with LLMs in the loop doing their piece, but these other parts doing their piece. And I suspect that the emergent whole will be fairly spectacular, even if LLMs are no better than GPT-4. GPT-4 is pretty damn good and provides a very useful basis for doing a whole lot of other things in AI space.
SPENCER: To help us better understand how this works, could you give us a specific example from the prototype you're working on?
JIM: Yeah, sure. Let's do that. First, I'll say it is truly a prototype; I think it's relatively unlikely I'll turn it into a product. I'm actually working with a professional screenwriter, so that I can have some domain knowledge on what we're doing. And just to give you a simple example of what we've got so far: the intent of this thing, at least in the first stage, is to be able to get to a full first draft of a movie script very quickly, say a 120-minute movie in a day or two or three, rather than the three months to a year it takes a human to knock one out. Because according to the screenwriters I've talked to, that's the hardest part. The curation and the rewriting and the fine tuning of the dialogue and all that stuff, the screenwriters are good at; but knocking out that 1.0 or 0.1 version of the script is the hardest part. So that's what I'm focusing on. So let me give you a very simple example. What I have running right now, as of yesterday, is you can write (what I call) a hint about what the movie is about. Let me actually do this live. Why not? (Always dangerous.)
SPENCER: Let's do it.
JIM: Let's do it. I'm gonna run the program. All right, here's the hint I typed in; I have a canned one for testing: A 19-year-old man meets a 39-year-old married woman and they both fall in love. A passionate affair ensues, which ends tragically when her husband kills them both. So that's the hint. So now I ask GPT-3.5 (in this case) to create a longer text, to essentially invent a movie from that hint. And it's doing its thing. It'll take a moment. Here we go: The movie opens with a 19-year-old man going about his mundane life. He's unhappy with the monotony of his life and longs for something exciting to happen. One day, while at a coffee shop, he meets a 39-year-old woman. They strike up an unlikely conversation and quickly realize they have much in common. As they become more comfortable with each other, they confess that they have fallen in love. Despite their age difference and her marital status, they decide to pursue a passionate affair. Their romance blossoms quickly, and they start to live a double life: secret dates, stolen moments in each other's arms. — Now keep in mind, the only thing I gave it was a hint. It just made up all this shit. — Meanwhile, the woman struggles with guilt and fear of being caught. Meanwhile, her husband...blah blah...and it goes on and on, another six, seven, eight sentences. Then the next thing I added, just the other day: it will now take this extended movie text and create scenes for the screenwriter to work with. I'm going to set it to make 16 scenes.
SPENCER: Right. So it's sort of like the LLM is operating now on its own output, right? So it produces this output of the general synopsis. But now it'll take the synopsis in and operate on that and generate scenes. So it's kind of recursing.
JIM: Yeah, exactly. And not only will it create titles for the scenes, but it'll also create short texts that describe the scenes. Each time, it does it slightly differently. So the scenes it created are: the mundane life, the coffee shop encounter, unexpected connection, confessions of love, secret dates, living a double life, swept away, struggling with guilt, monitoring her every move, confrontation, tragic end and bodies found, sense of sadness, poignant conclusion, forbidden love, and regret and remorse. And it's provided text about as long as the hint for each one of these scenes. The next level in the project will allow the screenwriter to edit the hint (which is one long sentence or two short sentences) and then say, "Make a longer, detailed version of this scene." And the LLM will do that itself, just as it did with the movie hint, turning it into the movie synopsis. And then the screenwriter will be able to edit that longer form and tune it just right, and then press another button and create the dialogue and action for that scene. And if they don't like the way it turns out, they edit the text slightly. Or if they want to go back to the hints, they can edit the hints, regenerate the text, and regenerate the dialogue. And then the final piece — this will be the most interesting one, the reason I want to do this — is we'll also have a set of characters, and the characters will have personality attributes. I've already tested that the LLM knows about the OCEAN model, for instance (the Big Five psychology model of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism). And I've actually created a system prompt for GPT that will allow me to say, "Create a character that has..." and I taught it to have a scale of 1-10 on each of those. I'd say, "O = 4, C = 6, E = 1, A = 9, and N = 4," and it will actually create a character with those personality attributes. It's pretty crazy.
And then we'll also add in a volatile emotional model, probably using the so-called OCC model, so that we can describe, in a state-based fashion, the current emotional state of each character and have that change through each scene. So, you can manipulate the emotional states, and those will help drive the dialogue. Just making a change in, let's say, the anxiety level of one character (if I'm correct about this) will fundamentally change the dialogue within that scene, because one of the characters will be more "anxious" than they were in the previous run. So that's an example of using LLMs to create text to be fed back to the LLMs to do something else, putting humans in the loop, then having the LLMs take that to the next level.
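The hint → synopsis → scenes → character loop Jim walks through can be sketched as a pipeline. This is a minimal sketch, not Jim's actual code: `llm()` is a stub standing in for a real chat-completion API call, and the prompt wording, trait format, and scene count are all illustrative assumptions:

```python
# Sketch of the screenwriting-helper pipeline: each stage feeds the
# previous LLM output back in as the next prompt. llm() is a stand-in
# for a real chat-completion call; here it just echoes its prompt so
# the plumbing runs end to end.
def llm(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"

def expand_hint(hint: str) -> str:
    return llm(f"Expand this movie hint into a full synopsis: {hint}")

def make_scenes(synopsis: str, n: int = 16) -> list[str]:
    return [llm(f"Write scene {i + 1} of {n} based on: {synopsis}")
            for i in range(n)]

def character_prompt(name: str, ocean: dict) -> str:
    # OCEAN traits on a 1-10 scale, as Jim describes teaching the model.
    traits = ", ".join(f"{k}={v}" for k, v in ocean.items())
    return f"You write dialogue for {name}, whose personality (1-10) is: {traits}."

hint = ("A 19-year-old man meets a 39-year-old married woman; "
        "a passionate affair ends tragically.")
synopsis = expand_hint(hint)
scenes = make_scenes(synopsis, n=16)
her = character_prompt("the woman", {"O": 4, "C": 6, "E": 1, "A": 9, "N": 4})
dialogue = llm(her + " Write the dialogue for: " + scenes[0])
print(len(scenes), "scenes drafted")
```

The human-in-the-loop part is simply that a screenwriter edits any intermediate string (the hint, the synopsis, a scene description) before the next stage runs.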
SPENCER: Yeah, it's so interesting, because it goes from "Oh, I have an AI assistant," to "I have a swarm of AI assistants that can actually be helping each other with tasks and all operating together to accomplish some goal," which is kind of mind blowing. When people hear about products like yours, I've seen really split reactions. Some people get really excited, and they think of it as an enhancer of human creativity. For example, my friend Sam Rosen has been making what, to me, is incredibly stunning art with AI; just really gorgeous stuff. On the other hand, I see a completely different reaction, where people say, "This is actually a total destruction of creativity": creativity is about expressing something genuine and authentic, it's coming from you, and having the AI write it is a completely different thing. I find that this point of view tends to come more from artists and writers. So I'm curious what your reaction is to that.
JIM: My gut reaction is that the people who have that objection have a grossly too high estimate of what human creativity really is. There is no magic black box; there is no secret sauce of how wonderful humans are. We're kind of like LLMs, kind of like reflex arcs: we throw things up. I've written a fair amount of stuff, and I've talked to other writers who are professionals, and we all acknowledge that when we're in our writing mode, we throw lots of stuff against the wall and then curate it: "This is good, this isn't." And then we recursively fine tune it. Again, talk to any serious writer and they'll tell you that writing is mostly rewriting. And this actually corresponds quite closely to what I'm doing in the scriptwriting helper, but I'm letting the LLMs do some of the rewriting, and then that allows the humans to go in there and fine tune it. If it didn't get it quite right, we're going to work on that. So I suspect that they are substantially overrating the wonderfulness of human creativity.
SPENCER: I was talking to a screenwriter about AI screenwriting, actually, quite coincidentally. And he was saying, "Well, you know, sure, it can make text. But the AI has never actually gone to war and had that experience. It has never actually given birth and had that experience." And I was thinking, "Yeah." But on the other hand, the AI has read thousands and thousands and thousands of accounts — firsthand accounts — of people going to war and giving birth and so on. And so in a weird way, it has sort of more experience to draw on than any given human.
JIM: Yeah, very good point. And again, I think, at least at the moment, the winning combination is letting the LLMs do what they're good at (which I would say is mining literally trillions of characters of human text, with all kinds of experiences, all kinds of times, and all kinds of points of view) and combining that with a human guiding it through hints. And I gotta tell you, the actual writing style of the LLMs is pretty good. And you can actually get it to change its writing style. For instance, I could add one short prompt inside my program to say, "When you write the movie synopsis, write it in the style of Hunter S. Thompson." It'll go quite nuts. You could also say, "Write it in the style of H.L. Mencken," and it'll write it in a quite different way. "Write in the style of Ernest Hemingway." "Write in the style of..." I don't know who the current popular writers are, and it may not know them, but the old classics, it can emulate their styles and be radically different: how long the sentences are, etc. And there are some other tricks I haven't even played with, in which you can create a completely synthetic style by what's called the few-shot process, where you actually load examples into the LLM. You could say, "Write in the style of Jim Rutt. Here's what it would look like," then do two or three of those, and it will then [quote] "learn," from three examples of Jim Rutt text, sort of what the statistical attributes of Jim Rutt's writing are. And from that point forward, it'll do a sort of okay job of emulating my style. There are a lot more knobs on this thing than I think a lot of people realize.
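The few-shot trick Jim mentions is just prompt construction: prepend a few labeled samples so the model picks up the statistical fingerprints of a style. A minimal sketch, where the sample texts are invented placeholders rather than real Jim Rutt writing:

```python
# Sketch of a few-shot style prompt: show the model a few examples of
# a target style, then ask for new text in that style. The samples
# below are invented placeholders, not actual quotes.
def few_shot_style_prompt(author: str, samples: list, task: str) -> str:
    shots = "\n\n".join(
        f"Example of {author}'s style:\n{s}" for s in samples
    )
    return f"{shots}\n\nNow, in the style of {author}, {task}"

samples = [
    "Short sentences. Blunt verdicts. No hedging.",
    "The point first, the caveat never.",
]
prompt = few_shot_style_prompt("Jim Rutt", samples, "write a movie synopsis.")
print(prompt)
```

The resulting string would be sent as a single prompt (or system message) to the model; no fine-tuning or retraining is involved, which is why two or three examples are enough to shift the style.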
SPENCER: So let's talk now a bit about what you expect to transform in society because of this tech. So what are some ways you see this being profound?
JIM: It'll eliminate one of the great annoyances of life, which is how crappy customer service has gotten, particularly for anything that requires somebody to think. I had to deal with a completely messed up experience with American Express the other day. They used to have really high quality people doing their customer service, but over the last several years, they've continued to go downhill like everybody else. And now they have mostly idiots, and mostly, idiots are fine for mundane stuff. But something quirky, outside the box, happened, and I tried to get it resolved, and it was like a goddamn nightmare. They fucked it up; they screwed up our online login; it was like that, da-da-da-da. And the reason is that the cognitive load of the problem was too high for a $17-an-hour person, bottom line. On the other hand, my wife and I were kind of debugging what had happened over two days, and we both agreed that GPT-4 would have handled it flawlessly. Dealing with idiots will become a thing of the past fairly quickly, and GPT-4 will take over a whole lot of customer service. That's at a fairly mundane level. At a higher level (and another project I'm advising some people on), I expect to see these LLMs become part of — again, in one of these orchestrated multi-part systems — our first smart information agents. And there's a chicken and egg thing going on here, which is that the existence of the LLMs is going to make having our own smart information agents really important really quick, because people can produce vast amounts of marketing spam and disinformation, etc., with these LLMs. Very quickly, we're going to need defenses against this. And I like to think of it very similarly, or at least analogously, to what happened with email in the mid-90s. Look, there for a while, '95, '96, it seemed like email would become unusable because of spam, right?
When people moved to internet-based email, the cost of sending an email fell to a fraction of a cent, and the amount of shit just went up exponentially. And the filters weren't very good. But fortunately, there was a breakthrough in natural language processing which allowed relatively good identification of spam, and it keeps getting better and better. Today, spam filters are pretty damn good. Maybe once or twice a week I get something that's true spam that breaks through Gmail's various levels of spam filters. But on the wider internet — on Twitter, on Facebook, on websites, on fake news websites, etc. — the LLMs are going to be able to produce text that people are not going to be able to detect was made by machine, and the sludge factor, the flood of sludge, is going to grow exponentially from this point forward. So we're going to need these personal information agents, just like spam filters, to be our interface to the information sphere: to deliver to us, in well-summarized form, what's out there; to allow us to click through to actually look at the stuff; to have it curated first by the AI for us. I was talking to somebody who's building an online system, who actually has it up and running right now, a kind of small social media platform. And I made the prediction that within 90 days, the cutting edge customers are not going to deal with any online platform that doesn't have an API that they can plug into their info agent. We're no longer going to want to roll around in the user interface of Facebook or Twitter or Reddit or something like that, because it will be way too full of sludgy spam from these LLMs.
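The statistical filtering Jim alludes to, the breakthrough that tamed email spam, was essentially naive Bayes over word frequencies. A bare-bones sketch with an invented two-document training set (real filters layer many more signals on top of this):

```python
import math
from collections import Counter

# Bare-bones naive Bayes spam filter, in the spirit of the late-90s
# email filters: score a message by how likely its words are under
# the spam corpus vs. the ham corpus. Training texts are invented.
spam_docs = ["buy cheap pills now", "cheap pills cheap deals"]
ham_docs = ["meeting moved to noon", "notes from the meeting"]

def train(docs):
    counts = Counter(w for d in docs for w in d.split())
    return counts, sum(counts.values())

spam_counts, spam_total = train(spam_docs)
ham_counts, ham_total = train(ham_docs)
vocab = set(spam_counts) | set(ham_counts)

def log_prob(words, counts, total):
    # Laplace smoothing so unseen words don't zero out the score.
    return sum(
        math.log((counts[w] + 1) / (total + len(vocab))) for w in words
    )

def is_spam(text: str) -> bool:
    words = text.split()
    return log_prob(words, spam_counts, spam_total) > log_prob(
        words, ham_counts, ham_total
    )

print(is_spam("cheap pills"))    # True: leans spam
print(is_spam("meeting notes"))  # False: leans ham
```

An "info agent" for the wider internet would apply the same idea at a higher level: score incoming content against what you actually want, and surface only what clears the bar.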
SPENCER: Wow. So you're imagining it's just constant, smart spam coming from AIs. And then to fight this, people will actually interact with their smart agent to pull out the non bullshit, the non garbage, from that?
JIM: Yeah, exactly. That's what the world looks like. I thought about it afterwards, it won't happen in 90 days. But there will be examples of this out in six months, for sure. And probably in two years, we will be dealing with the infosphere by an extra level of indirection, which has both pluses and minuses, and will fundamentally change the business model. Because, for instance, we're already seeing this in the last week, and I suspect it's in reaction to this vision: both Reddit and Twitter have announced that they're going to start charging for their APIs. They're still free right now at the personal level, but I suspect that that will have to change as well. Because one of the things you can strip out if you have an API call, say to Twitter, is (guess what) the ads. They are really easy to identify statistically. So the ability to support the services based on advertising will go away. Fortunately, the costs of computation and networks have come down so drastically, the costs will be quite low. Twitter's total revenue per monthly active user — and this was before Musk fucked it up, so this was prior to Musk — was about $1 per monthly active user per month. So to have as much revenue density as pre-Musk Twitter had, a $1 a month API subscription would be all it would take. For Reddit, it's even less, it's about 25 cents of revenue per monthly active user. So they could charge 50 cents to have access to the API into Reddit, and they'd double their revenue density. And I suspect that is where the world is going. So one thing, to my mind, a huge benefit of this, will be destroying the giant mistake that was made when the Internet became advertising supported. I think that is close to the root of most of the evil that we see online. And I think this is going to be one of the great benefits of this info agent as the response to the flood of sludge that will come from the LLMs.
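Jim's subscription-versus-ads arithmetic checks out as a back-of-envelope calculation. The per-user revenue figures are his rough recollections, not audited numbers:

```python
# Revenue per monthly active user (MAU), per month, as Jim recalls them.
twitter_revenue_per_mau = 1.00  # dollars, pre-Musk Twitter
reddit_revenue_per_mau = 0.25   # dollars, Reddit

# A $1/month API subscription matches Twitter's old ad-driven
# revenue density exactly:
twitter_api_fee = 1.00
print(twitter_api_fee == twitter_revenue_per_mau)

# A $0.50/month Reddit API fee would double its revenue density:
reddit_api_fee = 0.50
print(reddit_api_fee / reddit_revenue_per_mau)  # 2.0
```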
SPENCER: In the last month, I've already seen an uptick in spam. I've seen more spam getting both into my email spam filter and on Facebook. In fact, I started seeing these bots appear in my Facebook comments and I wanted to see if I could bait them. So I wrote a post that was just designed to attract the bots. And actually, I was able to attract over 100 bots to my Facebook post. And I was just like, "Ridiculous."
JIM: Because you mentioned crypto? That's all you've got to do; mention crypto, and you will get bots.
SPENCER: I'll read some of the stuff I put in my post to attract the bots. I say: crypto, Bitcoin, Ethereum, money, make profit, wealth, growth.
JIM: That'll do it. All you gotta do is mention crypto and they'll talk about all that shit. Yeah, you'll get 100 bots. I would say, oddly, Twitter seems to be currently doing a better job of fighting the bots than Facebook. I basically stopped using Facebook. I've not quite stopped, but as my listeners know, I only do social media six months per year. I take a break from the first of July to the first of January every year. And when I came back this year, I just did not feel motivated to do Facebook much. I dabbled with it. I thought, "Jesus, this is shallow as shit. Same shit different day. God dammit." And so I cut my Facebook consumption by at least 90% compared to what it used to be. And I probably amped up my Twitter participation by 25%. It was about 50-50 previously, so a net savings of 30% or so of my time invested in social media. But anyway, bottom line, I do look at Facebook enough to see these bot invasions; you don't see that so much on Twitter, surprisingly.
SPENCER: Yeah, well personally, I love Facebook because I use it as a place to write essays and then get people to give feedback on them. But it's a very unusual use case for Facebook, and it's kind of exciting that Twitter now allows longer tweets, so maybe I can make use of more of that.
JIM: Yeah, though people kind of have to pay the seven bucks (or whatever it is) to get the blue check. And I'm a long winded son of a bitch, as this conversation indicates. I paid my $7 to write long tweets. A lot of people don't like long tweets, and it's kind of countercultural. So if you want to use it for getting people to give feedback on your essays, probably Facebook, or even better Reddit, would be the place to go. If you can find a spot on Reddit that has the community you want to reach, those people are happy to read long stuff and will give you very long winded comments in return.
SPENCER: Okay, so we've talked about a couple implications of large language models. Some other ones I want to ask you about: do you think that this is going to essentially replace search engines? Because, obviously, this has been in the air, people talking about whether regular search engines will survive this transition.
JIM: Well, regular search engines may not, but the LLMs themselves won't replace search engines any time soon, at least not for the mundane cases of "What's the closest taco stand to where I am right now?" I will say there are some use cases, because I'm already using LLMs for...let's say you heard somebody mention something about some aspect of some philosopher's thought. And today, you could type it into Google and then start tracking down links and reading shit, etc. And it might take you 15 minutes to get some vague sense of what Hegel's thoughts were on some stupid topic or other. Today, I would go to GPT and say, "Describe Hegel's position on x and blah, blah, blah." They are very, very good for that kind of stuff, but not good for the mundane stuff, or the fringe stuff. For example, you ask for a bio of Jim Rutt or yourself, and you'll probably get total shit. You ask for the bio of George Washington, it'll be pretty good. And of course, Bing is taking the next step, as they've wrapped their Bing search engine in GPT-4, actually. (At least they say GPT-4. I'm not 100% convinced.) And it is good for some things, but not for others. I use it a lot more than I ever used Bing before. But I still use Google for deep and subtle searches. It will impact the search industry, but it's not quite yet ready to put it out of business. But the combination of the two might. So I suspect within a year, every search engine of note will have a good quality LLM wrapper around it as an optional interface. But there will still be times you want to go direct, you want just a literal search.
SPENCER: Right. Sometimes you're just trying to find the website of a restaurant or store or something like that.
JIM: But more often than not, if you think about the things that your average normal person does, most of them get relatively little value add from having an LLM in the loop. But for some of the more subtle things, particularly things intellectuals do, I think you'll definitely see it become a major use case.
SPENCER: These days, I do find myself using ChatGPT-4 for informational searches. But I find it's most useful when I can quickly verify the answer, like it gives me a proposed answer and then I can double check. But it's still a lot faster to get a proposed answer and double check that than it is to search 20 pages on Google and try to find it. I also find it useful in cases where it's okay to have some error rate, like where the goal is not to be 100% accurate. Like it's fine if it's wrong 10% of the time. But I imagine there are a lot of people working on how to improve the hallucination problem. Part of that might come from what you're describing, where it could also do kind of a regular search in the background, and so kind of combine the LLM output with the search output together. But also, I think citing sources, like being able to say, "This answer I'm giving you, you could also find a similar answer on this website." So even if it didn't come from that website, if you want to sort of double check it, you can go to that website and see what they say, or you can think about the credibility of that website. And so I'm wondering, do you think that they'll be able to solve the hallucination problem through these kinds of methods, or do you think it's kind of a fundamental deep problem?
JIM: I think they will continue to make gains on the hallucination problem. Whatever they did in GPT-4 substantially reduced it versus 3.5, but didn't get it to zero. And I do understand that, according to OpenAI, the next model will probably have attribution (where did these answers come from?). And interestingly, the Bing lashup of Bing search plus GPT-4 does provide links. It's not doing a really deep integration of the two; what it's doing is a Bing search first, and that feeds into GPT to simplify it and turn it into nice prose. But the sources, I believe, are being passed in by Bing. What will be really interesting is when the models themselves have sufficient internal attribution capability to provide reasonable links to where the stuff is coming from. We believe that's coming. When? We don't know.
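The "search first, then have the LLM summarize with links" pattern Jim describes is what's now commonly called retrieval-augmented generation. A minimal sketch of the control flow; `web_search` and `llm` here are hypothetical stand-ins, not real Bing or OpenAI API calls:

```python
def web_search(query):
    # Stand-in for a real search API; returns (url, snippet) pairs.
    return [
        ("https://example.com/hegel-overview",
         "Hegel held that history unfolds through a dialectical process..."),
        ("https://example.com/hegel-dialectic",
         "The dialectic proceeds through contradiction and resolution..."),
    ]

def llm(prompt):
    # Stand-in for a real model call; returns a canned summary that
    # cites the numbered sources it was given.
    return "Hegel saw history as a dialectical process. [1][2]"

def answer_with_citations(question):
    # 1. Retrieve: run the search and number the results.
    results = web_search(question)
    context = "\n".join(
        f"[{i + 1}] {snippet}" for i, (_, snippet) in enumerate(results)
    )
    # 2. Generate: ask the model to answer only from those sources.
    prompt = (f"Answer using only these sources, citing by number:\n"
              f"{context}\n\nQ: {question}")
    answer = llm(prompt)
    # 3. Attribute: each citation number maps back to a checkable URL.
    sources = [url for url, _ in results]
    return answer, sources

answer, sources = answer_with_citations("What was Hegel's view of history?")
print(answer)
print(sources)
```

The attribution comes from the retrieval step, exactly as Jim says: the model is summarizing passed-in sources rather than recalling where its training data came from.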
SPENCER: So let's transition to talking about some of the risks. I don't know if you and I have a similar assessment [inaudible]. But I know you think about five different types of AI risk. Do you want to kind of walk us through that?
JIM: Actually, I think I now have six. I've added a sixth one. So the first one, I think, is the classic well-known risk. I call it ‘Yudkowsky risk' for Eliezer Yudkowsky. Other people call it the AGI Paperclip Maximizer, or the Singularity. The idea being that, at some point, we'll get an AI that is smarter than we are. Let's call it 1.1x the horsepower of the smartest human on Earth. And so what do we do with it? We say, "Why don't you design your successor and make it smarter than you are?" And if it's able to do that rapidly, then we'll have one that's 1.5x as smart as the smartest human. Then we ask it to do the same, and it creates one that's three times smarter than the smartest human. And then, again, it recursively goes from three to nine to 100 to a million. And in the worst case scenario, in six hours, there's a new AGI created by a stack of predecessor AGIs that's a million times smarter than the smartest human, and it takes over the world and turns the world into a paperclip factory and kills all the humans. That's the so-called fast takeoff AGI singularity. I would say not zero risk of that, but zero risk of that coming from the LLM directly. And I'm very glad that people like Eliezer Yudkowsky and Max Tegmark, Bostrom, and the other guys that are out there are looking at the AGI paperclip maximizer risk. But it's still a ways off. I think the consensus view is that that kind of AGI is maybe 20 years away. There are some people saying that LLMs will accelerate it, and it could only be five years away. But it's not right now.
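The recursive self-improvement arithmetic Jim walks through compounds very fast. A toy loop, with a purely illustrative 3x-per-generation multiplier:

```python
# Start just above the smartest human, as in Jim's scenario, and
# assume each AGI builds a successor three times smarter. The
# multiplier and starting point are illustrative, not predictions.
capability = 1.1
generations = 0
while capability < 1_000_000:  # "a million times smarter"
    capability *= 3
    generations += 1
print(generations)  # 13 tripling steps cross the million mark
```

That's the whole force of the "fast takeoff" worry: with any fixed multiplier above 1, the number of steps to a million-fold gap is tiny, so everything hinges on how long each step takes.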
SPENCER: [Laughs] 20 years is not very long, even if that is right.
JIM: Yeah, that's true. That's why we really do need bigger resources than we have today allocated to it. The US government should have a major effort and should be funding independent research on Yudkowskian risks. And we're not spending enough on it. [inaudible]. The second one, I describe as bad people, or people doing explicitly bad things, with the narrow AI that we have today. I'd suggest that China's use of facial recognition falls into this category. They're essentially building a close to ironclad police state by using narrow AI. And I'm sure our government is doing, if not exactly that, analogous things that are also not so good. And just to throw out one where LLMs will come into the loop, I expect the combination of LLMs with other things — particularly with cognitive science, models of human cognition, etc. — will soon be able to produce qualitatively more powerful advertising than has been created so far. And as a person who basically thinks advertising is pollution of the meme space, I do not think that is a good thing. So that's risk number two. Risk number three, I got this from Daniel Schmachtenberger. He puts forth, and I think he's right about this, that the emergence of all kinds of AIs — narrow AIs, proto-AGIs, and even things in the middle like LLMs — will serve to accelerate the current system, the status quo, what some of us call ‘Game A'. Game A will now start moving faster, there'll be more new products, there'll be better marketing, factories will become more efficient, supply chains will become rationalized. So all the things that Game A is rushing towards — which, by the way, will destroy the biosphere and drive us crazy, etc. — will happen faster. So it'll give ‘Game B' less time to take off to save the day. I think that's a real risk. On the other hand, I think Daniel and Tristan Harris, another person that advocates this risk, underestimate the countervailing power, which is that these technologies will also empower the periphery.
And as I mentioned earlier, these open source versions of these models that are outside the nanny rails and outside the surveillance of these big companies, who almost certainly collude with the government, will allow people who are crafting the alternatives to the status quo to also use them. What the balance is of adding acceleration to the status quo core while also empowering the periphery, I'm not 100% sure. But I'm not assuming that the outcome of that battle is necessarily negative. It might be, but the takeaway is the periphery needs to get good at this stuff in a hurry. And from what I'm seeing, it's happening. All the good ideas for applications of these LLMs, in particular, are coming from the periphery. And if we look at the various technological revolutions, back to the PCs of the late 70s we talked about: guess what, all the computer companies that existed prior to 1977, with the exception of IBM, were destroyed, basically. And then the minicomputer companies came along, and the PCs destroyed them too. The internet totally changed publishing and destroyed the newspaper industry, etc. So the periphery won in many, many cases. And I think it's quite possible that this acceleration of Game A will be more than offset by the alternatives to the status quo becoming more powerful, more rapidly. Another risk: Forrest Landry, a guy I regularly have communications with and have had on my podcast five or six times, has a very interesting AI risk, which is that even if we never get to AGI, we never get to the smarter-than-a-human AGI, because AIs are so cheap and so good at narrow tasks, we will gradually hand over more and more decisions to the AIs. And humans will gradually lose capacity, in the same way that you hear stories, at least, about people who no longer know how to read a map. I'm sure that that probably is a thing, right?
And we know there are definitely people who can no longer multiply two three-digit numbers in their head, or even add two three-digit numbers in their head, because they started using calculators at a very young age. And if narrow but very competent AIs take over more and more and more decisions and humans lose the capacity to make those decisions, where does that take us? There's a quite hilarious and scary movie called Idiocracy, which looks out 500 years in the future, where humans have degraded and become amazingly stupid. And part of the premise is the AIs are doing much of the work. But eventually the AIs kind of fail too. So there's a real chance there. Forrest then goes on to another more speculative risk, which is that this trend from risk number four, AI gradually taking over more and more human decision making, means there'll be more and more computers, and computer environments are not really friendly to the biosphere. So things like computer rooms, chip factories, power plants, etc. And so Forrest argues that gradually, over a period of 100 years or more, more and more of our habitat will be sacrificed to build more and more AIs. I'm not sure if I buy that one, but I can see the argument. I'm not sure how strong that risk is. And then the sixth one, and this one I came up with in conversation with Daniel Schmachtenberger, and actually I think Forrest a little bit too. And that is that very quickly, we people will be living at least one more level of simulation away from reality. If we have this info agent — and I talked about interacting with the infosphere through it — that's one more level of indirection away from actually dealing directly with another person. And that may not be good. It may mean that part of the alienation of human life today is the fact that we are more and more levels of indirection away from reality. And it may well become way more than one extra level. For instance, one of the things LLMs will probably be good for is building out the metaverse.
One of the big barriers to the success of the metaverse has just turned out to be that it's very expensive, very time consuming to build artifacts in the metaverse. But if I can just type a few hints and say, "Create me a wonderful castle based on San Simeon, but do it in the style of a 19th century French whorehouse and give me a Lambo and a Bugatti." Press the button and that creates you this cool virtual space. That's actually not even hard to do. That's just a bit of [inaudible] work. We could be seeing the Metaverse and virtual worlds suddenly taking off by being easily populatable by using human interfaces via LLMs to universe or world builders. And again, we could find many people trapped in higher levels of simulation away from real life. And I would suspect that part of the problems we have with humanity today in our world comes from these levels of simulation and adding two or three extra ones is probably not going to be a good thing. So that's my six AI risks. I'd be interested to know what you thought. You said you thought we disagreed maybe. What do you think of those six risks? And what am I missing?
SPENCER: Yeah, I think that's a really good breakdown. One that comes to mind that might be missing is one that I've heard Paul Christiano articulate, and I'm not the best person to explain it. But the basic gist of it, as I understand it, is: imagine that AIs do more and more of the thinking in society. So, ultimately, they're doing things like making decisions for organizations, or automatically choosing advertisements to run, and things like this, that even if there's not a kind of fast takeoff of singular AI that kind of takes over the world, you can imagine sort of the world sort of being slowly taken over by numerous competitive AIs in competition with each other. So humans are sort of ceding more and more of the territory of what's happening in society. Until eventually, society is run by these, maybe they are corporations or whatever powered by AIs in competition with each other. But that also doesn't seem like a great long-term outcome.
JIM: That's, I would say, more or less identical to my number four: the gradual taking over of more and more decisions by AI. And yeah, let's add in the component that they'll be competing with each other. And that kind of gets to risk number three, Schmachtenberger's view that it will be accelerating Game A, which is all about competition; in particular, entities caught in so-called multipolar traps. But now, though, they're AI-powered. So I think we caught that one.
SPENCER: Okay, got it. Got it. Yeah, I took your number four to be more about humans losing capacity, like losing the ability to do things like we no longer write essays. But I was imagining more the control of society being ceded over to AI.
JIM: Yeah, I think I didn't focus on that. But that's clearly an implication of four. And I'm glad that you made it more explicit.
SPENCER: Great, awesome. Which of these six different risks that you mentioned — we've got the fast takeoff, Eliezer/Singularity risks, we've got the bad people using narrow AIs risk, we've got the acceleration of the status quo/Game A risk, we've got the handing over more and more of our capacity and control to these increasingly powerful AI's risk, we've got habitat destruction, and then finally we've got kind of this deepening of levels away from reality risk — which of those actually are you most concerned about and makes you most afraid for the future?
JIM: Truthfully, reviewing the list, I think they all are things we ought to be concerned about. We know that bad things being done with narrow AI are happening right now. And we need to be on our toes that we don't create a highly automated police state here in the West. Hell, you go to a place like Great Britain: London alone has 2 million CCTV cameras, probably not yet hooked up to state of the art AI, but it won't be long. So we have to resist that. The acceleration of Game A by AIs, we're not going to be able to stop that. So I would suggest that we should work to empower the periphery. So the periphery needs to get good at these weapons, so that we can use them to accelerate the alternatives to Game A as well. In terms of gradually handing over more and more decisions to AI, that's probably going to be a fairly slow rollout, but it's one that could really disable human capacity over a period of years. How we stop that, I don't know. It definitely has me concerned; probably less concerned about gradually squeezing out human space for resource consumption for AIs. That's probably a risk that I don't think happens until you get to Yudkowskian paperclip maximizers. Forrest thinks otherwise and has a fairly articulate view on it, but I'm not too worried about that one. In terms of the levels of abstraction problem, I do think that's going to be very severe and fairly soon. So I would put that one pretty high on my list. So I can't give you one. I could kick out one, but I'd say the other four, we should be thinking about and considering what we want to do.
SPENCER: Actually, another one that comes to mind, and I wonder if this is sort of equivalent to the ones you've already given, is not so much bad people using narrow AI, but AI giving people a concentration of power, like we've never seen before. And these aren't necessarily bad people. They could just be CEOs of large companies. But now, a large company means not just $100 billion, or even a trillion dollars, it means like $100 trillion, like a company that does most of the cognitive labor in the world or something like that.
JIM: Yeah, it's possible, though it's not obvious to me why one company ought to be a single winner. I do think it's something we'll see. A venture capitalist I saw on Twitter made, I think, a very interesting point. It's that very soon, particularly in the technological type startups, the number of employees you're going to need is gonna go down a lot. So a good founder with a good idea, and one or two people that know how to use these technologies, may be able to build pretty significant companies. And so I think I would probably be more concerned about that, at least over the next few years, than about some super powerful company that manages to take over all intellectual property development, for instance. But I think about it and I suppose that is possible. Somebody gets really, really, really good at building these tools and has the technical chops to build their own technology, kind of the way hedge funds build their own technology. Okay, how about that? Let's say a hedge fund for intellectual property creation figures out, ahead of anybody else, how to write movies, how to write books, how to write TikToks, etc., using AIs, and does it better than anybody else and way better than humans. Then, yeah, I suppose it is possible. I like that actually, now that I think about it: there could be a runaway first mover advantage that allows one or a very small number of companies to utterly dominate the intellectual creation space.
SPENCER: Great, another thing to worry about.
JIM: Number seven: the possibility of first mover advantage spiraling to a giant single winner in a domain that we never thought was susceptible to that.
SPENCER: Even a duopoly is not great, if we're talking about most human labor being done by a company. Alright, before we wrap up, I just want to touch on a couple other topics that I know that you think a lot about. So let's transition the topic away from AI. Let's talk about this idea you have: "deep" peace. What is "deep" peace?
JIM: Ah, yeah, this is a new idea I had in late 2022. I was putting together a presentation about Game B for a potential funder. And I was starting to think about, as I generally do before a presentation, what are some of the arguments against the ability to turn away from the status quo, that would make it impossible to turn away from Game A? And unfortunately, I found one. Game A is rushing forward towards our planetary boundaries — it's already over them in some areas, and it will be over them in other areas soon — and is already dominating our psyches in ways that are extremely unhealthy, etc. So how do we change the nature of Game A so it's not a runaway train? The Game B alternative (which we won't go into in any detail here) presents a whole different way of thinking about it that could put us into a stable, non-growth-oriented economics for a long period of time, where there's still improvement in our knowledge of the universe and our ability to do things, but not accelerating our use of the resources of the Earth and destroying the biosphere. But here's the problem I found. Since 1914 at the latest — it was probably more like 1870 — power in war has been correlated fairly well with power in economics and power in technical innovation. And to the degree that countries are locked in war — or not even fighting any wars, but caught in what we would call a multipolar trap — they continue to need to spend on defense because the other guy is spending on defense. Think of the Cold War, where Russia and the United States never actually got into it, but were actually spending considerably higher portions of GDP than what we're spending now. And so as long as the nations of the world are caught in a defensive multipolar trap, they can't back away from exponential economics.
They have to keep their economies growing exponentially and their technological innovation growing exponentially as well, because, if you've ever played games like Civilization or any of these other exponential growth games, if you fall off the exponential curve, you get eaten by the other guy. So before we can make that full transition, at a societal level, to Game B, we somehow have to go beyond peace. And peace is good; fighting wars is the stupidest and worst thing that humans do, bar none. But peace alone is not enough. Even if we never had another war, so long as everybody is caught in this multipolar trap of being worried that the other guy is about to start wars, and we have to continue to have a very significant defense capability, we can't back away from exponential economics. So we've got to find deep peace. And what does deep peace mean? It means that the thought of warfare is no longer thinkable, that we have somehow had a sufficient change in our institutions and our human capacity such that no one worries about war at all. And hence, the polities of various sorts don't have to build their economies as big as possible to be able to have a credible and state of the art defense. And that is the simple idea of deep peace. And unfortunately, I don't have an answer to how to get there, which is a little disturbing. And I would certainly hope that thinkers cleverer than myself might take on the mission of saying, how do we actually achieve this deeper form of peace? Because we are going to have to get there if we're going to save humanity from its own self-destruction by an ever accelerating Game A.
SPENCER: That's really interesting. It seems to me that in order to achieve that idea of deep peace, it has to be stable to defection, right? If you have a kind of situation where one side benefits from instituting violence again, then it probably is not going to be very stable, ultimately. So yeah, just curious to hear your thoughts on that.
JIM: Yeah, that's your classic ESS (Evolutionarily Stable Strategy) from a game theory perspective. You always have to be resistant to invaders who basically break the social premise. And that's, of course, also what gets us into the multipolar trap problem, where if India is spending a lot on defense, then Pakistan needs to spend a lot on defense. If China is spending a lot on defense, the US has to be spending on defense. And so if we were to get to a world of deep peace, we would have to all be confident that we have eliminated the ability to be invaded by destabilizing people who are essentially cheating. And, you know, it's science fiction, but you could kind of imagine this. Some of the folks I've talked to suggest that one of the prerequisites is going to have to be radical transparency. Let's say we do momentarily reach a period of deep peace. Part of that agreement might be that there are no more secrets, there are no more government secrets, period. Any citizen could go anywhere at any time and look at any document and inspect any building, etc., and is allowed to blow the whistle and tell the world what they have found. That might be a minimum base around which one can build a deep peace that is not subject to an invasive strategy. And then there probably also needs to be some form of the equivalent of how foragers used to work. In small hunter-gatherer bands, if people tried to impose themselves as dictators on the small band, first people laughed at them, then they ignored them, then they exiled them, and if they came back, they would kill them. So we'd have to develop some sort of social immune system, so that if, under radical transparency, somebody discovers some entity that is violating the norm of deep peace, we'd have a series of escalations that we all agree in our heart of hearts are correct, up to and including killing them, if necessary, so that they can't continue with their program.
Those are the two parts: radical transparency plus a social immune system that can self-organize a response to crush such an invader before it gets any traction.
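The multipolar trap Jim and Spencer keep returning to has the payoff structure of the classic arms-race game from game theory: whatever the other side does, arming pays better for you, so both sides arm, even though both would prefer mutual disarmament. A toy sketch with illustrative payoff numbers:

```python
# (my_choice, their_choice) -> my payoff. Numbers are illustrative only;
# what matters is the ordering, not the magnitudes.
PAYOFF = {
    ("disarm", "disarm"): 3,  # deep peace: resources freed for everyone
    ("disarm", "arm"):    0,  # I'm exposed to the armed side
    ("arm",    "disarm"): 4,  # I dominate
    ("arm",    "arm"):    1,  # costly standoff (the Cold War)
}

def best_response(their_choice):
    # Which of my choices maximizes my own payoff, given theirs?
    return max(["arm", "disarm"],
               key=lambda mine: PAYOFF[(mine, their_choice)])

# "arm" is the best response to BOTH choices, so the trap closes:
print(best_response("disarm"))  # arm
print(best_response("arm"))     # arm
```

Jim's two proposed parts, radical transparency and a social immune system, amount to changing these payoffs: transparency makes secret arming detectable, and the immune system makes defection expensive enough that "arm" stops dominating.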
SPENCER: All right, final topic before we finish up. Tell us about what is ‘the meaning crisis'? I don't know if people have heard that phrase before, so maybe you could tell us what that is. And then can you tell us your thoughts on it? I know you have some interesting ponderings on it.
JIM: Yeah. Amongst the folks that I hang out with in (what some people call) the liminal web space of metamodernism, Game B, regenerative agriculture, the commons, the commoning movement, etc., many people talk about the meaning crisis as one of the afflictions of the status quo. The hypothesis is that many people are feeling alienated from their life and don't feel any purpose in their life, and that this has accelerated some of the other bad trends like consumerism. If you don't have any deep meaning in your life, or any meaning at all, then you're more susceptible to being manipulated by advertising to work harder and harder to buy more and more meaningless experiences. John Vervaeke is one of the leading thinkers on the meaning crisis. And his diagnosis is that it's the ending of the two worlds model by the Enlightenment, where the two worlds model means we have our life here on Earth (in the universe, the physical world), but there's also a transcendental realm: heaven or hell or Valhalla, or whatever; various religions have various views of it; the Buddhists have a more subtle version with Nirvana and karma and all that stuff. There's a supernatural realm. Starting in the 1740s and moving forward, more and more of, at least, the intelligent people have basically rejected that two worlds model, and realized that we are in one world, and there is no transcendental additional realm. And while some people are able to accept that fine and dandy and move forward, others find it extremely destabilizing and alienating. There's considerable discussion about how much or how many. I often point out, there are still lots of people who believe in the two worlds model, probably a majority in the United States, right? Look at the polls. But amongst advanced thinkers, it's probably a much, much lower number. I've not necessarily fully bought into this idea of the meaning crisis. Because at a minimum, I point out, hey, meaning still works.
When the sky gets light in the East in the morning, it means that in an hour the sun's gonna come up. There's really no meaning crisis at that level. In terms of another John Vervaeke distinction, in this two worlds model, we could think of people having a meaning of life. The meaning of my life, if I am a medieval Catholic (like in 1100 AD), is that I should try to behave as well as possible in the short little time I have here on Earth, so that, if I don't screw up, I can go on to perpetual bliss in the presence of God for eternity. That's a meaning of life that says our purpose of being here is to not commit a mortal sin, die in a state of grace, and go to heaven. And Vervaeke is a one world thinker, and he says, "We can't take that kind of thing seriously anymore. So we must rethink that and talk about meaning in life. What are the things that we do that resonate with us and give us a reason to get out of bed every day and do things?" And his answer is kind of subtle and involves developing ecologies of practice, so that we can see the world more clearly, which he calls 'relevance realization'. Being able to deprogram ourselves from foolishness, as he calls it — and I sometimes call it malware — and allowing us to see the world more clearly will allow us to find meaning in our everyday lives. And he discusses an ecology of practice, including martial arts, meditation, psychedelics (potentially, though he's not a great fan of those), etc., that can help us peel away these artificial, misleading, bad things and allow us to find meaning in everyday life. And there's a long discussion about this by various people. And again, I've sort of been okay with it; I sort of understand it. But I just recently heard another take on this, which I'm not sure is right, but it resonated with me a fair bit. In fact, I just did a podcast with Peter Wang, CEO of Anaconda, the Python software tools and data science company, and it will be published in a few days.
And he suggested a radically different take on the meaning crisis. Yes, the two worlds issue was an issue, but that was quite a long while ago. As for the extreme meaning crisis people seem to be in today — as exemplified by things like the high suicide rates, the high mental health issue rates, particularly amongst young people, et cetera — his very simple lens, which has really gotten me thinking hard about this, is that our lives, more and more abstracted from the physical and the real, have left us with a huge share of our decisions being entirely inconsequential. They just don't matter. There is no meaning to them. For instance, you're thinking about going out to dinner at a restaurant tonight. And you go through this discussion back and forth with your significant other about where you're going to go and why you want to go there rather than here, et cetera. At the end of the day, it's a completely meaningless choice. The one I've long used as an example: you go into a big chain drugstore, and you walk into the shampoo aisle, and there's 200 varieties of shampoo, and people have all kinds of valences about this shampoo versus that shampoo. Well, guess what, they all clean your hair pretty damn well. It doesn't really fucking matter. But nonetheless, part of our mental space is consumed by these essentially meaningless choices, in which you don't even get any feedback. You have to sort of invent your own rationale: "I'm a Clairol kind of girl," or whatever. "I buy the cheapest generic from the refill shop, because I'm a parsimonious person," et cetera. And you have these kinds of false choices with very minimal consequences in terms of the real world. And if we restructured our lives so that we had actual consequential choices, we would be engaged in a form of life that is much more like what we evolved for. If you're a hunter-gatherer, many of your choices are quite consequential.
If a hunt fails for too many days in a row, you're going to be mighty hungry. And if it fails for a couple of months, you're likely to die. So where you move your camp to matters. There's true meaning there. Whether you choose to gather or hunt during a given period has real consequences. And if we could reorder our lives so that we're making actually consequential decisions, we would have much more meaning, and a stronger linkage between our own agency and our own existence. I'm not sure I did a great job explaining that, but that's essentially Peter's perspective. For those who are interested, check out the Jim Rutt Show in a few days, and you can hear the two of us go on for almost two hours on just this topic.
SPENCER: Yeah, it makes me think about people who have big projects that they're running or involved in, how that can make your decisions feel very meaningful, because you care deeply about the success of this project, and there might be a lot at stake in the decisions you make. And I wonder if that does give people a sense of meaning for exactly the reason you're describing.
JIM: Yeah, and I think it's probably one of the reasons why I, personally, have never really resonated that much with the meaning crisis narrative. Because personally, I've always been involved in consequential decisions. I've run businesses, I've got a farm, I advise people on their businesses, I'm involved in scientific research, where people have to make decisions about where to put their resources. So I've been lucky. I've had a life full of consequential decisions. But then you step back and look at people who are more conformist and have kind of followed the traditional path through life, with some job at some company in some cubicle somewhere, where they really have very little autonomy, and they come home and they're locked into the consumerist thing, or the status game, etc. And now I can finally see it. The reason it never bothered me is I am always making consequential decisions. But I can also see that, the way our society has evolved over the last 50 years, fewer and fewer people have had that privilege, frankly, and that is, at least, a significant piece of this meaning crisis.
SPENCER: So do you think AI is going to make that better or worse?
JIM: Let's think about that for a second. Will it make it worse? Well, at the moment, personally, I'm engaged in consequential decisions about how to use these things. So for me, it's actually empowering. But once they actually get out there in the world... Well, okay, let's think about this. One of the applications I posited earlier was that LLMs and closely related technologies look to be very, very good at customer service. So if we get rid of a few million people in telephone customer service and have them do something more tangible in the world, that might be good. I think the fully generalized version of this is a concept called fully automated luxury communism. There's actually a book by that name, which envisions most of the non-consequential jobs — what David Graeber calls 'bullshit jobs' — getting automated over the next n years. And I would say, LLMs and related technologies suddenly make a number of these white-collar jobs, like customer service, that didn't seem likely to be automated soon, automatable after all. But then comes the challenge. What roles do these people now have in life that are more consequential? And I think that is how the rise of automation could actually address the meaning crisis: by decoupling people from meaningless employment. But to actually land that, we'll have to find meaningful roles for those people in the world.
SPENCER: Right, because presumably, most people don't work in customer service or a call center because they love it and have a passion for it, and find it deeply meaningful. Usually, it's because that's the best job they think they can get, given their opportunity set, and so on.
JIM: Or the least stressful, or something. Instead, suppose we had a society where those folks were encouraged to work in the local food industry, for instance. Suppose you changed the economic playing field such that local organic food was more competitive than it is today against Walmart-style industrial agriculture, and instead of 1% of people being involved in farming, it was 20%. I will say, I've never met a farmer (and I know lots of them, including myself) who ever suffered the meaning crisis, because every goddamn decision you make on the farm is consequential, right? Is it too early? Is it too late? Is the soil too wet? Is it too dry? What's the weather going to be in two weeks? Will the pests eat this crop? Being a farmer is about as consequential as you could possibly imagine. So moving people from call center customer service to local agriculture would, I believe, substantially increase their meaning in life, if not their meaning of life.
SPENCER: That's interesting, because then we start thinking about, well, how do these decisions actually get determined? It's not like there's some altruistic, all-knowing dictator saying, "Yes, we need to put more people into local farming." This is all coming out of a whole bunch of interactions: people in competition with each other, individual choices, what for-profit companies are doing, what nonprofits are doing, and so on. So I wonder, if it turns out that AI does displace a huge number of jobs, where do the new jobs come from? In history, we've had a bunch of technologies that have removed lots of jobs, and new jobs often came about through those technologies in other ways. But where is this next set of work coming from? I wonder.
JIM: Yeah, I will say that's part of our Game B thesis. We have to think about life differently. We live slower, we live a less materially rich life, but a life that provides a role of dignity and meaning for everybody, irrespective of their biological, familial, or sociological endowment. And I will confess, that's easier to say than it is to do, but at least the Game B world understands that we have to do that.
SPENCER: Jim, thanks so much for coming on. It was a great conversation.
JIM: It was very good. I really enjoyed it. You asked great questions, and these are some of the most important topics confronting humanity today.
JOSH: A listener asks: Do you have any anxiety about climate change? And if so, how do you manage it?
SPENCER: It's funny that this is being asked of me today because I just recorded a podcast episode today on how bad will climate change be. I feel a small amount of anxiety around climate change, but not that much. And I think part of the reason for that is that I'm more worried about other potentially really horrible things for society. I worry about bioterrorism. I worry about viruses that could be worse than COVID. I worry about threats from AI. And so, maybe it doesn't leave me that much extra worry for climate change. But no, I am concerned about climate change. It's just I would put it not in the top two societal concerns I have.