CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 039: Knowledge Management and Deugenesis (with Jeremy Nixon)


May 6, 2021

What is "The Index"? What are some benefits of externally compiling and organizing one's knowledge? When is spaced repetition useful? How can we co-opt our visual systems to boost memory? Would we all be more interested in producing an external personal knowledgebase if we could feel on a visceral level how much information is constantly being forgotten? How and when should we move up and down the ladder of abstraction? What sorts of problems can be solved by simulation? What is a generative model (as opposed to a predictive model)? How can constraints improve creativity? How useful are credentials as a guide to how much a person knows and whether or not a person is "allowed" to have an opinion on a topic? What do credentials actually signal about a person? What are "fox" and "hedgehog" thinking? What is deugenesis?

Jeremy is the founder of Consilience, an immersive information retrieval company. Previously he did machine learning research at Google Brain and studied Applied Mathematics at Harvard University. Jeremy works relentlessly towards aggregating knowledge, acquiring knowledge, and creating new knowledge. Find him on Twitter at @jvnixon or email him at jnixon2@gmail.com.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Jeremy Nixon about the development of evolving knowledge repositories, abstraction and emergence, the impact of information processing on perspective, and the benefits of visualization.

SPENCER: Jeremy, welcome! It's great to have you on.

JEREMY: Absolute joy to be in touch, Spencer.

SPENCER: The first thing I wanted to ask you about is 'The Index.' What is 'The Index'?

JEREMY: 'The Index' grew out of a sense that every aspect of my life, especially the important parts — like execution and learning and making plans — should come with reflection, with some accountability and quality standards, and should build on itself over time. So I started writing documents about how to learn effectively or how to execute effectively, which I would go to and follow when I was executing on some plan of mine. Over time, I built up models of so many subsystems that it ended up being a comprehensive take on how to live, and I tried to collate all of these documents in a single place called 'The Index.' A number of friends of mine compelled me to do this. Fundamentally, the documents were for me. They were for training myself and improving myself over time in a way that's external, sitting in an external brain that would be a huge help to me in refining my thoughts. As somebody who was quite intentionally creative, I would just keep adding all of these different documents on decision-making, on creativity, on habits, on introspection, to a single place which would basically be the external brain that I would come into contact with whenever I was trying to improve.

SPENCER: What I love about The Index is it's an externalization of your beliefs about many, many, many important topics but it's also a living organic document. Unlike a series of blog posts where you probably would write them and then just never touch them again, it seems like you're actually updating your views and, as you update them, you're updating The Index along with it. Is that right?

JEREMY: Yeah, that's absolutely the idea. The concept of Evergreen notes has become somewhat popular lately. Andy Matuschak, for example — who has created an incredible blog — has a number of notes that he keeps online that are constantly being updated. In my mind, over the course of your lifetime, you end up doing a tremendous amount of learning which can simply drop away. I guess that, in part, I was horrified at all of the things that I had learned about how to do things effectively which I didn't bring to bear on the actions that I was taking moment by moment, and which never got fed back — where I would (say) learn something about a decision that I made that had gone terribly and, in the absence of a decision journal that reflected on those decisions, would just make the exact same mistake over and over. Actually, in a moment when I was horrified, I put together a group of people — mostly Stanford students who were interested in the same idea — so that we could collectively ask ourselves what mistakes we had been making repeatedly and create some accountability system for not making them again. There is a sense that it's very easy to go through life doing things that are actually predictable to yourself, if you're willing to be metacognitive for a moment and think about what's been going on. And there's a level of external social evaluation that can be brought to your own understanding of how you should learn or act or create that is very easy in writing but next to impossible in conversation. So enabling all of those things was a huge upside of building this comprehensive document.

SPENCER: Just to step back for a moment, I want the listener to imagine that every time they learn something new about (let's say) decision-making, they go update this document that reflects their current best understanding of decision-making. And whenever they learn something about values, they go update another document that reflects their best understanding of values. Then all of these documents on all these important topics are all linked together to each other, forming a web — because that's basically what Jeremy has created — this web externalizing his current understanding. Where can people find this if they want to check it out?

JEREMY: This is pinned to the top of my Twitter feed. It's called The Index and has remained pinned there for the last year. There's also a website, jeremynixon.github.io, which links to it.

SPENCER: I think I do something that's a little bit similar but with a different approach, which is that basically, when I'm thinking about a topic, I'll eventually try to write an essay about it which is my way of deepening my understanding, putting together all the pieces of information I've learned and trying to generalize them, and extract the lessons from them. But it does have the unfortunate side effect that it tends to be a static document. Then I'm on to the next topic. So I really like your approach. But I'd love to hear some of the benefits you feel like you've gotten from using this approach.

JEREMY: I would say that my documents, in a lot of cases, have turned into externally useful documents for other people as well. Take someone who wants a synthesis on how to learn effectively: in my Inspired Autodidacts group, for example, we try to give ourselves accountability on learning challenging technical material, so people who are learning physics or machine learning or something that actually requires a lot of effort will go to a document that I've created and use it as part of their foundation. So it feels like I'm actually helping other people. But for myself, when I go to learn something — an easy example is when I was reading the deep learning textbook as part of attempting to get into one of these major general intelligence research orgs — I used the techniques in that learning framework around deliberate practice: basically identifying the subset of the material that I struggled with and working on those problems over and over again. Or the idea that I would write down everything that I had read, where my own writing of the text would basically be an externalization of my understanding. So at the end of the book, I had 150 pages of my own representation of all of the concepts and mathematics and technical ideas from the book.

SPENCER: How would you do that? Would you read a page and then you'd write that page? Or what do you do?

JEREMY: I basically try to do two to three paragraphs, so that my memory, as I'm reading, is forced to hold all of that content simultaneously. And then I have a document where I try to write down everything that was in those two to three paragraphs. Say I've learned something about recurrent neural networks and the way that they represent information differently than convolutional neural networks, and details of their different implementations or their variations: I will write down, in as much detail as I can, what I believe I read. It's true that it takes longer to write down what you read. But once you've written it, it's obvious which subset of the content you knew and which subset you didn't. Because as soon as you look back through the text, you see all sorts of things that were dropped and forgotten. So you see exactly where the holes in your memory are, and it also feeds back: as I'm reading, I'm expecting to have to reconstruct what I've read as completely as I can, and that expectation forces me to read with a level of detail, and in an integrative way, that ties the concepts I've read together so that they feel coherent and easy to remember. Techniques like these definitely cross listeners' paths from time to time. You'll hear things about how to learn more effectively, but it's really hard to remember all of them when you're trying to internalize some knowledge which you expect to be useful. For me, the plan was to implement all of the algorithms in this textbook in a machine learning library, which I would use both to demonstrate my knowledge and to learn how to come up with novel algorithms — really deepening my understanding of what has and hasn't worked in the past in creatively generating new algorithms like these. And I had this really excellent resource to remind me of all the best things I'd learned about how to learn.

SPENCER: Would you go back to that before learning sessions and review your 'learning how to learn' document?

JEREMY: Yeah. At present, I have a textbook-reading habit and I go back to it weekly, if not bi-weekly. The document is called 'Compressing Content on How to Learn' and it's in The Index; my Twitter handle is @JvNixon if you're interested in finding it.

SPENCER: How often do you update it?

JEREMY: Basically, weekly, monthly, depends on what's going on in my life. But at present, Inspired Autodidacts is doing these weekly meetings. At our last meeting, someone was talking about spaced repetition — I used to be a pretty heavy Anki user. I built a number of decks where each deck represented some texts —

SPENCER: Could you explain spaced repetition?

JEREMY: Yeah, of course. The concept behind spaced repetition is that there's a forgetting curve: over time, you lose more and more of what you've learned. If you're reminded of things at the right point in time, you can actually install them in memory for much, much longer than if you weren't intelligent about when you reminded yourself of the content. One way to learn things is to review them every single day. But researchers of memory realized that the rate at which you forget should inform when you re-study something. So they created a set of algorithms that describe the way you forget, and that prompt you at the most efficient time for remembering all of the content you're trying to remember. The idea of spaced repetition is that you space out the repeated exposures to whatever concept you're learning so that you install it in long-term memory at minimal cost.

SPENCER: So you learn an idea today and then maybe you get quizzed on it tomorrow. Then if you get it right, maybe you quiz a week later. But if you get it wrong, maybe you're quizzed immediately after and so on.

JEREMY: That's exactly the idea.
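
To make the scheduling logic concrete, here is a minimal Python sketch of an SM-2-style scheduler of the kind tools like Anki build on. The constants are illustrative assumptions, not Anki's actual values.

```python
# Minimal sketch of an SM-2-style spaced repetition scheduler.
# The constants here are illustrative, not Anki's actual values.

def next_interval(interval_days, ease, remembered):
    """Return (new_interval_days, new_ease) after one review.

    ease is the multiplier controlling how fast intervals grow (e.g. 2.5).
    """
    if remembered:
        # Successful recall: push the next review further out.
        return interval_days * ease, ease
    # Failed recall: reset to a short interval and shrink the ease factor
    # so this card comes back more often from now on.
    return 1.0, max(1.3, ease - 0.2)

# Example: reviews at growing intervals until one lapse resets the card.
interval, ease = 1.0, 2.5
for review, remembered in enumerate([True, True, False, True], start=1):
    interval, ease = next_interval(interval, ease, remembered)
    print(f"review {review}: next in {interval:.1f} days (ease {ease:.2f})")
```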

SPENCER: Have you moved away from using spaced repetition?

JEREMY: I learned a number of new things about how to do spaced repetition which I added to this doc but, basically, what I had been frustrated by was the challenge of converting various kinds of content into things that were worth repeating. Often, ideas would basically stack on each other or interact, and the higher-level conceptual content was hard to turn into a flashcard. Most spaced repetition systems are very flashcard-like. In my mind, converting an interesting concept in machine learning into a flashcard... Say it's the bias-variance tradeoff, or the idea of inductive bias and all of its consequences. Usually these are hierarchies; there's structure to the knowledge. I don't know how to say, "Here's the concept of inductive bias and here are a set of examples of inductive biases," in a flashcard. I would like to preserve that concept-and-instantiations structure in a card, which was challenging for me. This member of the Autodidacts group described how he would use his visual system to, first of all, visualize the answer to the flashcard, so he was basically using two modes of processing. But he also described a method for adding structure to these cards that I hadn't heard before. So I went back to my 'Compressing Content on How to Learn' document, updating spaced repetition with this new set of ideas. I've made a few heroic attempts to use it in the past, some of which were super helpful — for language learning, it was really, really helpful — but some of which I would say were abject failures, such as some of my textbooks, which I just really struggled to use spaced repetition effectively for, and which I now have some hope about.

SPENCER: What idea that you learned gives you hope that spaced repetition might be usable for these ideas?

JEREMY: It's really the thought that I can integrate my visual system with my standard conceptual memory. In his cards, he includes some visual representation of the answer, which he expects himself to remember in addition to the textual representation. So in my mind, say you're trying to remember something about residual nets, which basically have a pass-through connection to future layers. You would try to remember some textual answer, but you would also have some visualization which you'd described and, trying to remember in the face of seeing the card, you'd come up with both representations of the response and much more consistently remember what was on the card. If you read any major text about these memory champions, they typically will build expansive palaces of visual concepts and use this synesthetic version of memory in order to improve their memory. There's a very fundamental principle behind this, the sense that humans are visuospatial reasoners. If you look at our language, 'Metaphors We Live By'-style re-representations of visual or physical content make sense to me. So I guess I believe a fundamental truth about the synesthetic nature of thought and cognition that makes me believe his idea — let us put both a concept and some visualization on the answer side of the spaced repetition card — is likely to succeed.

SPENCER: It's fascinating how much better we are at remembering visual things most of the time. You imagine you've gone someplace physically in space and then you come back there, usually like, "Oh, I've been here before," and you remember, "Oh, I need to take a left." Whereas something like a number, a statistic, people are just atrocious at remembering them. It just doesn't seem like the sort of information that sticks in our brains.

JEREMY: Exactly. Even visual mnemonics feel like this, where you ask someone to come up with some insane experience or some insane event that is really emotionally gripping. Say you want them to learn the concept of abstraction: rather than holding it in the abstract, they're supposed to see it as moving higher — see a flock of birds or a dragon or some representation of height — and tie it to the concept itself and, in so doing, basically let that sequence of visual experiences be their memory representation. There's this thought that emotions are going to be deeply tied to the learning process, so you both want a very strong motivation to learn and you want the experience of learning, moment by moment, to be this emotional tapestry. This attempt in mnemonics is a very general principle. It says you're basically going to think associatively. If you can associate something you're trying to learn with something that has emotional valence, then you're much more likely to have it in long-term memory.

SPENCER: What are some other advantages of doing something like The Index? Would you advise other people to do it?

JEREMY: I'm very surprised that people don't have this in practice.

SPENCER: It's funny because it's this thing that's so unique about you and yet it surprises you that other people don't do it.

JEREMY: It seems really obvious to me. I guess at some level, there's a background expectation that you would want to remember what you learn. It shocks me that the experience of college, for most people, is of being embedded in an incentive structure which they're optimizing against, as opposed to being in a genuine learning experience that they want to follow them for the rest of their life. I think that being serious about the ideas that you learn — if you actually believe that they're impactful and important — means remembering them and means calling them to mind in the moment where it's important to call them to mind, having your tools at hand. In my mind, it feels like you spend your entire life forging these incredibly powerful tools. Most kids are learning physics, learning mathematics, and these are the building blocks of incredible technologies and, frankly speaking, of the decisions that are going to create the kind of life that you have. And they promptly forget them, and it doesn't bother them somehow that they're forgetting them. I guess there's insufficient metamemory, so you can't experience your forgetting in a way that is sufficiently visceral. In my mind, the fact that you can't remember that you forgot is the main determining factor for why people don't do things like The Index. If they could see everything that was being lost, there would be a sense that this was a cataclysm, this was a Ragnarok, and that there needed to be some way to hold on to all of the truths that were likely to create your future.

SPENCER: I think that's a really great point: not only do we forget almost everything we learn, but we're not aware of the fact that we forgot it. It's one of the properties of forgetting. I was experimenting with different techniques to fall asleep faster, but I was having this difficulty telling which ones worked, because when something worked, I would fall asleep, and then the next morning I wasn't sure if I'd actually been using the technique. I think it's really shocking, the degree to which so many people who say they want to learn, think they want to learn, read blog posts and books, and then just forget virtually everything and have no system in place to make sure that doesn't happen.

JEREMY: Often there'll be this vague background sense that, in having read the blog posts or in having read the book, that they have a better intuition now for how to learn or that somewhere in their subconscious experience is valuable information about learning. They think, "Oh, actually, well, next time I learn, I must be improved." But I think that mostly that's untrue. In practice, you actually do have to remember what you learned in order to use it. If you use it, the gains are dramatic. You read about the athletes who employ deliberate practice in comparison to those who don't; it really is dramatic. But if you merely know that, at some point in the past you read a book which mentioned deliberate practice, you're gonna be in a much worse place because you won't actually be able to bring it to bear on your actual learning challenge right now. A lot of these techniques work because you use them. They don't work because you know about them.

SPENCER: I think there's something that might be confusing to people which is that, when someone gets to a real level of expertise, it doesn't seem like they're using the explicit knowledge very often. It seems they're just doing the thing. But often, you need the explicit knowledge as a scaffold to take you from naive intuition to advanced intuition. For example, someone who's an expert at martial arts is probably not thinking about how to do the right punch. They just do the right punch. Whereas someone who's a beginner has to learn explicitly about what the right way to punch is. Then they need to practice it until it gets fluid. They can actually self-critique because they have that cognitive System Two understanding of how to do the right punch. They can critique their own punching during practice.

JEREMY: Absolutely. The Art of Learning by Josh Waitzkin.

SPENCER: I love that book.

JEREMY: The sense that you can achieve mastery systematically is so strong in that text. He has this idea — he describes circles within circles — where you internalize a principle one stage at a time. He'll describe a six-stage throw, where he practices each stage intensively: the push on the opponent, and the way that they create space, and the way that their creation of space leads to a sweep by him. He'll describe each stage in intimate detail and describe the way in which his practicing of it internalizes it into his muscle memory, into System One. There's this book on tennis, The Inner Game of Tennis, where another writer breaks down the way that System One processes can be internalized: you'll have some System Two sense, some high-level conceptual sense of what to do, but then you have to physically instantiate it in a very visceral and intuitive way. But basically, the six-stage process is, for the person he uses the throw on, a single stage: they experience a single motion in a moment where he's experiencing six individual motions, and his ability to decompose the experience comes out of this deliberate training. There's a sense in which the decomposition has sunk into his muscle memory: the distinctions no longer have to be made deliberately, and the person he's fighting never perceives them at all. I think that mastery usually does involve internalizing the fundamentals deeply enough that you can chunk or abstract them into a single object which lets you engage with higher-level patterns but where, if you're an observer of the master, it won't be clear that the thing being done can be decomposed in the way they describe.

SPENCER: This is such a good point. So much of learning is really about this. You think about the first time you ever learn in calculus class about the idea of a derivative, and you're trying to wrap your mind around it. Eventually you get such fluidity with the idea of a derivative that you can then just use it in a sentence. You could say, "Well, when I take the derivative, such and such thing happens." Now you've just encapsulated this really complex idea in just one word. Then you can start building really complex ideas on top of that. Then eventually, those complex ideas start collapsing into just a single word that you can then use to build up even more complex concepts. I think this is so much about how humans think about really complex things. We keep collapsing information into just a single word. Once we really grok that word on a deep level, we can then start building new things out of it.
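
To make Spencer's example concrete: the single word 'derivative' encapsulates the entire limit construction underneath it, which is the standard definition:

```latex
f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}
```

Once that construction is chunked into one symbol, statements like the product rule can be built on top of it without re-deriving the limit each time.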

JEREMY: When you look at The Index, the first opener is a manuscript, the book that I have been putting together on abstraction, which is about information and how it's represented in exactly the way that you describe, often hierarchically. The reality is that, when you're sitting on top of all of this knowledge — take the derivative — you can imagine trying to go through the concept of a limit every single time you wanted to describe the idea of the derivative. Everyone can visualize the set of rectangles being aggregated in some integral as you're trying to approximate the area of an object. But basically, by encapsulating that information in a single word, and then composing many words that are all encapsulations of lower-level underlying content with each other and into even higher-level abstractions, we are capable of describing incredible amounts of information with a single word. One tragedy about it is that it's conflationary, so it'll merge things that are slightly different in a lot of cases. Take the concept of China, or of America: it conflates the government with the people, with the economy. In a lot of cases, it's hard to say, "China has decided," as opposed to a particular institution or a particular leader within it. So there's a conflationary damage that comes out of using language this way. But there's also so much representational power where, in a single sentence, I can describe the sentence as a composition of words, all of those words including huge amounts of information — every time I say the word 'word,' every single word that exists is under that banner. It is just an incredibly powerful tool, to be able to describe transformations on a huge scale with a single symbol. I guess a part of me is just astonished at its power and its depth and the way that we can automatically do transfer between different objects by saying, "Okay, here's a pattern that occurs in one context and a pattern that occurs in another context." If I describe both with the same word, or say both are defined by the same word, I can learn about them simultaneously. Everything I learn about one is transferred to the other automatically, just via the description.

SPENCER: So cool. I think about this with regard to psychology research. When I'm talking to my colleagues, I often use this metaphor of thinking about the solar system and saying, "If you're really, really far away from the solar system, you can treat the solar system as just one thing. The solar system is like a dot. But then when you start getting close to the solar system, you'd have to start realizing it has different planets and it has a sun and so on." I think about this in psychology research because let's say you're studying personality, maybe you want to think about something like agreeableness as just one thing and that might work for a bunch of contexts. But then as you start getting closer and closer to agreeableness, you have to actually start noticing the fact that it actually is a bunch of things. Agreeableness itself subdivides into a bunch of things and each of those things can further be subdivided and so on. Depending on the context, you've got to be moving down and up these levels of abstraction in order to actually still make sense of the thing you're talking about in order to reason about it properly.

JEREMY: Absolutely love that. The farther you zoom in on many of these concepts, the further they decompose. There's almost a recursive decomposition: you split agreeableness into subsets and you split the subsets into subsets. There's a sense that you eventually ground out somewhere that's useless for your problem. You're trying to pick the level of analysis at which to think, which typically is the one that's causally interacting with whatever outcome you care about. If I use personality traits — take the big five dimensions: agreeableness, conscientiousness, extraversion, neuroticism, and openness — if I am trying to think about a person, having five traits is really useful because I can ask, "Where are you on these five traits?" I can hold that all in working memory simultaneously. You can see these as abstractions over behaviors or over dispositions, where lots of people will have a disagreeableness/agreeableness kind of response. You can aggregate over people, saying, "You're similar to all the people who have an agreeableness level of 54, or negative 50 on a negative-100-to-100 scale." That's incredibly useful for predicting the behavior of the people in question. You also want to ask: what level of analysis is the optimal level for my problem? How can I, in moving between these levels, make criticisms of the concept of agreeableness, or more effectively use it? People will, at a high level, say, "Oh, I don't like their personality." If you try to decompose that, you could actually ask, "Which subset of their personality? What do you mean when you say personality?" The big five itself is a decomposition of that concept.

SPENCER: Absolutely. For example, by some breakdowns, agreeableness splits into things like trust, altruism, compliance, modesty. When you're actually thinking about a specific person, as opposed to trying to average across many people, you might start asking yourself, "This person is very agreeable. But what do I really mean by that? I mean that they're very altruistic, but actually they're not really that modest." Now you can start splitting that apart and getting a more nuanced view. I also think this comes up when you start looking at the correlation between variables. You're like, "Agreeableness correlates with something else. Let's say, being less good at negotiating." I'm just making that up. But let's suppose you find that; now you can start asking the question, "Which part of agreeableness is really driving that? Is it the compliance part or the modesty part or the altruism part?" It lets you ask the more interesting and nuanced questions.

JEREMY: There's a sense that these recursive decompositions, in some cases, ground out in something clear and, in other cases, do not. A number of friends of mine are really big fans of first-principles perspectives. One way to go first principles is to do these kinds of recursive decompositions, where the canonical example is Elon starting SpaceX and saying, "What is the price of the nickel in this rocket? What is the price of the iron, of the metals in this rocket?" If you ask what the prices of all of the components of the rocket are, you get a number that's 1.5% of the overall cost of the rocket, and from that point you ask: actually, all we care about is composing these materials in a very particular configuration. So it's trying to bottom out what the price of this object could be, which comes in part from asking this recursively decompositional question of the physical world. In my mind, this thinking tool is incredibly general. It's a cornerstone of a creativity system that I'm interested in, where you intentionally ask, "How is the concept that I'm using flawed or conflationary?" and, in so doing, come up with things like trust as a subset of agreeableness. When you find the decomposition, you can say, "There are lots of traits like these that we could recombine into some different higher-level trait. If you say agreeableness comes out of these four sub-traits, why are they being combined with each other? Is that useful?" I know the big five comes out of eigendecomposition, this principal components analysis over personality questionnaire data. There may be some foundational claim there to say that agreeableness is importantly made of these sub-components and they shouldn't be divorced from one another. But in a lot of contexts, people aren't that careful about how they design their conceptual scheme. The concepts that they use are, in most cases, how they do their creative work and how they represent the thing they're interacting with. As soon as you decompose the major concepts in what you're engaging with — I work in machine learning, so I think about concepts like regularization or supervised learning as Borg-like constructs that are huge and incredibly conflationary — you can get a lot out of breaking these things down and seeing clearly exactly what it is that's driving the effects you care about. A lot of the process of normal science is making these decompositions happen at a lower level than a folk experience of the system would have them happen.
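
As a rough illustration of the eigendecomposition Jeremy mentions: run principal components analysis over a matrix of questionnaire responses and keep a handful of components. The data below is random, purely for illustration; with real survey data, the item loadings on each component are what get interpreted as traits like agreeableness.

```python
# Sketch: how trait-like dimensions fall out of questionnaire data via PCA.
# The response matrix here is random noise, purely for illustration.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(500, 40))  # 500 people, 40 Likert items

pca = PCA(n_components=5)              # keep five components, Big Five style
scores = pca.fit_transform(responses)  # each person's position on them

print(pca.explained_variance_ratio_)   # variance captured by each component
print(scores[0])                       # one person's five derived scores
```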

[promo]

SPENCER: I was having an interesting debate with a friend the other day where they were suggesting this idea that things must get more complicated as we aggregate them. I was pointing out that, in fluid dynamics, if you imagine water as just a whole bunch of little tiny molecules bumping into each other, that's absurdly complicated. But if you start forgetting that water is made of molecules and you view it as a continuous infinitely differentiable fluid, it actually simplifies a lot. Then you can actually get further complexity if you start adding things like, what if the water is moving really fast, and so on. It can start really complex and then get simpler again and then get more complex again. The way that you parameterize the thing can actually make it harder or easier to think about.

JEREMY: Yeah, it's incredibly deep. I guess statistical mechanics and quantum mechanics have the exact same property. People tend to use a word like 'emergence' to describe just how powerful this is, where it seems like, out of massive, incredibly complex behavior, a singular pattern — one that can be simply described — emerges. The leverage that this gives you over the complex system is tremendous. I think fluid dynamics and statistical mechanics are wonderful examples of this, to the point where it's created a scientific expectation that you do this in the face of complex systems, or that this will be possible: the entire idea is to try to find some incredible simplification that will describe the system in a way that allows you to control it. There are so many systems that we've struggled to do this with. If water is going down your drain, for example, and there's a lot of interference with itself, it's very hard to predict or model exactly what's going to happen. It's still beyond the reach of physics. I do have this sense that, in practice, this has turned into a very deep idea for a lot of scientists, who see these consilient experiences — something utterly complex reconciled into something simple — as some of the most beautiful things that can be created.

SPENCER: It's this idea of, "What information can you ignore?" If you ignore the right information, then things become much easier to think about. You can actually answer questions that you couldn't answer until you ignored some of your information.

JEREMY: This is an incredibly useful definition: what can I remove or get rid of? But there are so many different ways to conceive of the same process. You could see it as ignoring a huge number of interactions within the water, but you can also ask, "What is the most compressed way to represent this behavior?" There's a sense that compression is this path to simplicity where, if you find some super simple algorithm that describes the workings of the water, you treat it as superior to an algorithm that has to specify the behavior of every single molecule. The representation that says 'remove the details' feels interestingly different from 'compress it,' which feels different again from moving to something conceptual rather than concrete. In a lot of ways you can ask: in physical reality, are the equations that describe these fluid dynamics actually being implemented? Or do they just happen to describe a process that's much, much more complicated than the high-level conceptual equations we're using to describe the system? There's also the sense that you have identified some shared structure between all of these water molecules and, in just figuring out that similarity, used it to model all of them simultaneously — that you've done something powerful. All of these are different perspectives on the exact same process of moving from the details to something higher up.

SPENCER: It seems like the abstraction process has to take into account what we care about or what our goal is. Because for example, one way to compress something is to (let's say) write something that simulates it. Maybe you could have a bunch of code that actually simulates a fluid flowing and that can be really, really useful if you're going to try to predict how it's going to flow and you have a set of initial conditions that you can plug in as the starting point. But let's say you don't have a set of initial conditions, then you're gonna have a hard time using that simulation to accomplish your goal. Maybe in another case, what you really want is not just some code that will actually simulate the thing but you actually want (let's say) an analytic mathematical expression. You like that because it allows you to manipulate it mathematically and maybe learn things about it. In other cases, you might want an asymptotic representation where you take some limit of some variable and say, "When this variable goes to infinity, this is how this thing will behave," or maybe in an equilibrium state or steady state or something like that. Each of these is a different way of compressing information, but they tell you really different things about it.

JEREMY: Certainly, approximating asymptotics via simulation is quite useful. You're actually asking, "How do these systems behave in the limit?" Simulation is also a really good way to understand the properties of systems, even if you can't (say) take this particular bathtub and model it: you can model bathtubs in general, including your simulated bathtub. When you're designing something that's going to engage with a system like it, you can design it in a way that accounts for the properties of the system even if you don't have the initial conditions. It's really useful to get at those properties even if you don't have the ability to measure things precisely. Often I ask, "Can we use simulation to inform decision-making?" So when something happens in the world — and I want to know what's going to happen in the future — I go to some set of simulations of the future. I consult those simulations to make a prediction about what will happen in actual reality. Because the generators that underlie the simulators are flooded with data from actual reality, the predictions are typically quite accurate. There's just a lot of power in coming up with some generative model of the system where you can create its outcome in sim. It does make you wonder: these are all interestingly different approaches to trying to figure out what's true, where, in this case, you have some generalization objective — you want a prediction that's going to generalize across your simulations and in the real world — and you're going to try to use a lot of information from the real world to inform those simulations. In a lot of ways, you can build practical tools that are almost impossible to build if you don't make this assumption that your ability to go from simulation to reality is quite good. In a lot of cases, you can approximate the diff between your simulation and actual reality and account for it and, in so doing, build a really potent predictive system.

SPENCER: Do you have an example of using that kind of approach?

JEREMY: Yeah. The OpenAI robotics challenge — where they had a robot take a Rubik's Cube in hand and solve it — is a very good example. They built an in-depth simulation of this robotic hand, and the simulation was run with many different parameters. In solving the Rubik's Cube in sim, they were able to figure out what parameterization of the movement of the robot's hand would actually move the cube in a way that was consistent with solving it. The sim of the real problem is really important because, if you're going to try to collect data on how to move the robot's hand, the process of collecting data in the real world will be incredibly slow. Moving the robotic hand into such and such an orientation takes time, and it takes energy, it takes power. You typically want to do this in parallel, so you'd have to set up hundreds or thousands of robots in order to approximate the amount of data that they were able to collect in simulation. In a lot of ways, this challenge was only solvable because they could go from simulations of the hand's movement to the actual hand's movement, with some corrections.
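
The trick Jeremy describes, running the simulation with many different parameters, is commonly called domain randomization. Below is a minimal sketch of that loop; `simulate` and `update_policy` are hypothetical stand-ins, not OpenAI's actual code.

```python
# Sketch of domain randomization: train a controller across many randomly
# perturbed simulators so it transfers to the (unknown) real physics.
import random

def sample_sim_params():
    """Randomize physical constants the real world might plausibly vary over."""
    return {
        "friction": random.uniform(0.5, 1.5),
        "cube_mass_kg": random.uniform(0.05, 0.15),
        "motor_latency_s": random.uniform(0.0, 0.05),
    }

def simulate(policy, params):
    """Stand-in for a physics-engine rollout; returns a made-up episode reward."""
    return random.random() - params["motor_latency_s"]

def update_policy(policy, reward):
    """Stand-in for the learning step (PPO or similar in the real project)."""
    policy["recent_reward"] = reward
    return policy

policy = {}
for episode in range(1000):
    params = sample_sim_params()       # a fresh, randomly perturbed simulator
    reward = simulate(policy, params)  # one episode under those physics
    policy = update_policy(policy, reward)
```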

SPENCER: They basically had to adjust the results of the simulator to take into account some physical reality that wasn't quite captured in simulation?

JEREMY: Yeah, that's right. You can use a model to basically fine-tune. You take the prediction from the simulated outcome, you take some small amount of data from the real world, and you just train a simple model that can do a correction.
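
That fine-tuning step can be sketched in a few lines: fit a small model on the gap between the simulator's predictions and a handful of real measurements, then add the learned correction at prediction time. The data below is fabricated purely for illustration.

```python
# Sketch of sim-to-real correction: learn the residual between the simulator's
# predictions and a small set of real-world measurements.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
sim_pred = rng.normal(size=(50, 3))  # simulator's predicted state, 50 trials
# Pretend the real world is a slightly skewed, noisy version of the sim:
real_obs = sim_pred * 0.9 + 0.1 + rng.normal(scale=0.01, size=(50, 3))

# Fit a simple model of the gap (linear here; anything small works).
corrector = LinearRegression().fit(sim_pred, real_obs - sim_pred)

def corrected(sim_output):
    """Final prediction = simulator output + learned correction."""
    return sim_output + corrector.predict(sim_output)

print(corrected(sim_pred[:2]))
```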

SPENCER: Oh, cool. I heard about companies doing self-driving car stuff that were experimenting with using simulations of driving in cities with the idea that it's hard to go video record a million miles of a self-driving car driving around. Maybe you could just have a video game that's quite realistic to real life and you can drive around millions of miles in the video game real easily. I don't know if this ended up actually being useful but I thought it was a cool approach.

JEREMY: I think that, especially depending on the algorithm you want to use, you want to generate as much accurate data as you can. There are all these edge scenarios like in car crashes, where you'd like your self-driving car to behave in a way that's consistent with saving the lives of the people involved without actually having to experience many car crashes in the real world. There's also just a real safety constraint that makes that kind of approach appealing. I guess that practically speaking, Tesla did just collect millions and millions and millions of miles of data. It was quite expensive but they set up a system by which it could be done and made a lot of improvements that were based on real-world data. So as far as I can tell, the mainline self-driving programs haven't been using this approach. But there are a lot of reasons that you would like it to succeed. You know that in your mind or in my mind, we can imagine a car accident and, in our projection or in our generation, experience what it might be like if we didn't step on the brakes or if we didn't swerve to one side or to another and, in simulating the experience, make better decisions in the moment about what to do. There's this theoretical sense in which generative models which understand the trajectories of all the objects in the environment can make accurate predictions about what will happen and act on the basis of that prediction. This model-based learning obviously has a lot of potential. But as far as I can tell, the self-driving companies aren't using that kind of technique.

SPENCER: They may be using some kind of simulation for unit testing. Imagine you have a system that you think works, you can now have a bunch of (quote) "unit tests" where you put it into these simulations and make sure that it behaves as expected. And that way, you know that you didn't break the code for it or something like that.

JEREMY: Yeah, I like that. That sounds really reasonable.

SPENCER: I also think that this kind of thinking is actually something the human brain does a lot where we're thinking about, "Do I want to go to this place for breakfast?" Then you run this little mini-simulation of taking a bite of the sandwich that you usually get there. Then you're like, "Yeah, I do want that." We're constantly doing these little tiny simulations. I've noticed one when something bad is about to happen, like I'm about to take a quick motion that might knock a glass over, my brain just projects this little quick simulation of the glass falling down, and I'm like, "Oh shit. I better not take that motion."

JEREMY: A part of me believes that it's way deeper than that and that almost all of our experience is generative where, whenever I remember something, in a lot of ways, I re-experience what's happening from some internal generation where, yes, it affects decision-making. But even the experience of something like listening to music is accompanied over and over again by my brain's predictions of what I'm about to hear being resolved or not being resolved by having heard it. There's a sense that, if I go back to some memory in the past, I recreate what happened in the past in my own mind and also have a very different experience of that memory from each time that I've generated it. There are many forms of therapy that are built around this reconstructive frame where you assume that most memory is generative and is constructive and is actively creating the past experience in the mind, as opposed to loading up some saved file that has the exact same characteristics as the old file.

SPENCER: That's a really good point. I think that's exactly right. I also think that this idea of generating even occurs in moment-to-moment experience. You can see this if, for example, you ever watched a really creepy movie and then you're walking around in a semi-dark room and your brain will (for a moment) convince you that you just saw someone staring at you through your window or stuff like this, where, actually, the lower quality that our sensory input is — the more noisy it is — the more our brain is actually seeming to use the generative algorithm. That generative algorithm seems to also be partially adjusting for whatever just happened. If we were just watching a scary movie, it's going to have a bunch of scarier stuff in the generation. It's actually reconstructing our experience on the fly.

JEREMY: I think we've all had the experience of fearful thoughts generating other fearful thoughts as well. It's fascinating to watch yourself be in these loops or watch people that you talk to be in these loops, where their experience of their own thoughts is clearly spiraling down or, in the case of fearfulness, where you imagine all sorts of horrible things that can be coming or can be happening. Because what's inside you is a visceral felt experience of fear which is justifying itself through ideas that you're coming up with. The experience of psychosis or of someone who's paranoid is that they'll have an internal generative experience of fearfulness that can come up with an entire range of specific things that they are afraid of but it's coming out of internal experience that's generating it. You can train yourself via scary movies for sure. But I think this training is very general. Honestly, if you decide how you want to feel, you can find the right kind of media to indoctrinate you with that feeling and immerse yourself in it till you see it everywhere and experience it everywhere. In practice, we're held within pretty clean bounds experientially by just watching normal television or going through life normally. But I feel like there's a world where you control your information flows in a way that gives you the constant experience of insight, or of growth, or of compassion, things that you would actually want to experience. Via neuro-associative training, you're basically going to have thoughts that are consistent with whatever patterns of experience you've been seeing in the last two or three weeks or that you've seen intensively over a period. Immersion experiences are very much like this where suddenly, you're having dreams in Chinese because all that you're experiencing is Chinese. I think a lot of learning is taking place through a lot of associative conditioning, which can be controlled intentionally but which, frankly, our education system isn't particularly built around and even our work systems aren't particularly built around.

SPENCER: Do you have a way of setting your information intake to try to produce these experiences?

JEREMY: Absolutely. The obvious thing is, "Who are you gonna spend time around?" But I'd say immersion is one of the most intense paths to learning, where I try to line my life up on multiple axes simultaneously. I'll give you an example from machine learning. When I was learning it in 2015, I wrote an idea list: I set a 10-minute timer and, under time constraint, asked, "How could I create an immersive experience of machine learning for myself?" I realized that in the evenings, I could go to events where I'd have intense conversations with researchers and with data scientists about what the future of this technology could create and how they were building the systems that they were building; that during the day, I could immerse myself via textbook reading and implementation; and that on the weekends, for fun, I could go to hackathons and basically build interesting side projects that came out of it. You do actually — within two or three months of complete immersion — realize that your mental patterns are completely tiled by the set of structures and patterns that you've been seeing in these texts. For me, the statistics was incredibly general. A lot of it was immediately applied to real life. Actually, what does it mean to overfit on my experiences? I'd see the way I overfit to my relationships — most people will overfit to a small number of data points that aren't necessarily representative of a greater whole. That concept would be so present that you couldn't help but notice just how much data you had for a lot of the claims that you were making. If you're learning about statistical power in this immersive detail, you'll ask, "How well-founded are all of my beliefs, given the kinds of data that I have access to?" And you start evaluating claims by how many data points they're actually resting on. You read a book and you're like, "Exactly how many data points has this author actually seen? For a relationships book, have they coached 500 couples or five?" These concepts, when you're immersed in them, completely totalize. I think it is a fast path to learning. People find they can only learn a language when they move to the country where it's spoken and they speak it all the time. But on some level, most technical learning — whether it's mathematics or programming — is in fact language learning. I think people benefit a ton from these immersive experiences but have no structure with which to make sure that they happen. Honestly, I think a lot of language learning is hamstrung by the absence of these programs.

SPENCER: You say it's language learning because you have to build up all these concepts and then build on top of those. Is that what you mean?

JEREMY: If I go to China, I will learn Mandarin much more quickly than if I am speaking English almost all the time but go to Chinese class for four or five hours a week.

SPENCER: But I mean with (let's say) machine learning, you're saying it's still language learning. Is that just because you're thinking of each of these concepts as part of a language?

JEREMY: They definitely are. I don't know how often you read textbooks. But if you've been reading a textbook for two-and-a-half to five hours a day for four or five days in a row — let me speak for myself — my language patterns change dramatically, noticeably. Suddenly, optimization is everywhere. I'll see the ways that systems in my world are the results of optimization processes that are in a state of equilibrium. Equilibrium properties are everywhere. You start to realize what the properties of the data that you're using to inform your beliefs are. Regularization would be everywhere. I would think of working out as regularization: in machine learning, there's a sense that you can simplify a model by constraining its weights to be within a certain regime. When I worked out, especially if I was sprinting or doing some sort of cardio workout, that would feel like the regularization workout, where I was taking my body — which would typically get larger in the face of muscle stress — and refining the muscles or tamping them down. Earlier, we were talking about synesthetic learning and the way that the visual system interacts with our conceptual representation of knowledge. I think it's a very similar idea here: when you're immersed in a conceptual set, you end up tiling the way you experience everything with it. Earlier this week, I was reading The Road to Reality by Penrose, and I just started seeing the geometric patterns across the trees that I see outside, the way that they're cylinders recursively branching off one another in triangular forms. The generality of geometry is stunning. You see just how many right angles are everywhere and the way that they recurse on themselves — a bookshelf, and every book on the bookshelf, has right angles. So I think, if you do sufficiently deep immersion in some set of ideas, you just start associating it with everything that you experience.
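
The machine-learning sense of regularization Jeremy invokes, constraining a model's weights to a certain regime, is easy to make concrete. Here is a small sketch contrasting an unpenalized linear fit with an L2-penalized one on the same noisy data; the data and penalty strength are illustrative.

```python
# Sketch: L2 regularization constrains model weights to a smaller regime.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 10))
y = X[:, 0] * 3.0 + rng.normal(scale=2.0, size=30)  # only feature 0 matters

plain = LinearRegression().fit(X, y)
ridge = Ridge(alpha=10.0).fit(X, y)  # alpha sets the strength of the penalty

# The ridge weights are pulled toward zero; the noise features shrink most.
print(np.abs(plain.coef_).round(2))
print(np.abs(ridge.coef_).round(2))
```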

SPENCER: It reminds me of the Tetris effect that some people get while playing video games where, after they've been playing Tetris for too long, they'll start having this sense that every object in their life could be slotted into other objects. They're imagining this building could fit right between those two buildings, and so on. Your brain is so used to doing that thing that it starts trying to do it everywhere. For Tetris, that's probably not very useful. But maybe if it's reading machine learning textbooks, actually, that is really useful because suddenly you're trying to find applications of those ideas in everything you do.

JEREMY: I completely agree. Have you had the Tetris effect?

SPENCER: I have not but I definitely had it when I was a kid playing video games where, when I would close my eyes, I would start seeing generative simulations of the video game but involving (let's say) characters that didn't actually exist in the game. It was like my brain inventing variants on the game but not on purpose. Just that would happen when my eyes closed.

JEREMY: A friend of mine and I played Tetris Friends — this was maybe a decade ago. I was really competitive with this friend across everything. We had test score competitions — we would both autodidact AP tests to see who could get the most fives — all these competitions. He was the best Facebook friend that I had on Tetris Friends: number one in Tetris Sprint and Tetris Marathon. I spent three and a half weeks grinding against Tetris Marathon and Tetris Sprint to surpass his scores. For the last week, every single night, it was Tetris. I'd dream Tetris falling asleep and dream Tetris waking up. I don't know to what degree my dream or generative experience of Tetris was integrating the knowledge of exactly where to place a particular block at a particular time, and how to set things up so that you can drop a piece into the side column and clear four rows simultaneously. I definitely had it hard.

SPENCER: It doesn't seem like a coincidence, right? When you do something intensively that's new, your brain wants to start generating lots and lots of examples of it. It feels to me that this is some kind of self-learning process. What do you think about that?

JEREMY: I think that the brain is basically saying, on some level, that I am consistently being forced to work on this kind of problem, or really need to work well on this kind of problem. Working on these problems activates a pattern and, in consistently activating the pattern, has been rewarded in the past. From my experience, if I can see my Tetris Sprint score go from 2:20 to 2:10 to two minutes flat, you can watch your experience of reward compel your brain to continue to generate the patterns that are experiencing reward. On some level, with language learning, if you're deep enough in, you have the first experience of dreaming in the language that you've been learning — your brain subconsciously generating the experience that has been immersive in your life. There's the lucid dreaming framework where you can trigger lucid dreams by having a trigger tied to the real world. I would attempt to put my finger through my hand every time I went through a doorway. It would clearly go through my hand when I was in a dream but would stop at my hand when I was awake. Waking up in these lucid dreams, you'd always just be seeing patterns from day-to-day life being reinforced as the default dreaming experience. I think a lot of dream interpretation came out of this sense that there are deep connections to what's happening in your life, from the visceral emotional challenges to whatever it is that you're learning. So it does seem super general.

SPENCER: I remember seeing a machine learning paper once where they used generative models to construct new training data and then plugged it back into the algorithm. You might think that would just overfit or something. But these new training examples, which were sort of like the original training data, actually improved the algorithm. I thought that was pretty interesting. I don't know if that work has really made much impact or not. But there's that idea that you can remix what you've seen into things you've never seen, then ask, "What would I do in these cases that I've never seen?" and that actually might prepare you for new cases in the real world.
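A minimal Python sketch of the idea Spencer is describing: fit a simple per-class generative model, sample synthetic examples from it, and check whether a classifier trained on the augmented set improves. The toy Gaussian data and model here are purely illustrative, not taken from any paper discussed in the episode.

```python
# Toy illustration (not any specific paper's method): fit a per-class
# Gaussian generative model, sample synthetic examples from it, and
# check whether adding them helps a downstream classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Small real training set: two classes in 2D.
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Generative step: estimate a Gaussian per class and sample new points.
X_new, y_new = [X], [y]
for c in (0, 1):
    Xc = X[y == c]
    mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
    X_new.append(rng.multivariate_normal(mu, cov, size=200))
    y_new.append(np.full(200, c))
X_aug, y_aug = np.vstack(X_new), np.concatenate(y_new)

# Compare classifiers trained with and without the synthetic data.
X_test = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
y_test = np.array([0] * 500 + [1] * 500)
print("real only:", LogisticRegression().fit(X, y).score(X_test, y_test))
print("augmented:", LogisticRegression().fit(X_aug, y_aug).score(X_test, y_test))
```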

JEREMY: The paper's title is "World Models." It's by David Ha and Juergen Schmidhuber. In a lot of ways, Schmidhuber has been very influential in creating a number of algorithms in machine learning but also a number of theories which we can talk about. I would say that this idea where you're going to have a generative model which, in their case, is a recurrent neural network...

SPENCER: Do you wanna define a generative model? We've mentioned that word a few times. But we haven't really said what it is.

JEREMY: There are two definitions that occur in the literature. The informal one is that the model generates the data rather than a label. Typically, the input to a machine learning model is text or an image, and the outcome is something like, "This text's sentiment is positive; it's a happy, positive review" versus "It's a negative review." With a generative model, what you want to do is generate the text itself. So you say, "Let's generate a positive review. Tell me something about this MacBook Pro that is really positive," and the generative model is going to write you something that's very interesting.

SPENCER: Instead of saying, "Here's some text, tell me if it's positive." It's like, "Generate some text for me that's positive." So it reverses it.

JEREMY: Exactly.
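To make the reversal concrete, here's a rough sketch using the Hugging Face transformers pipelines. The model choices are just library defaults or common examples, not anything endorsed in the conversation, and the exact outputs will vary.

```python
from transformers import pipeline

# Discriminative direction: text in, label out.
classifier = pipeline("sentiment-analysis")
print(classifier("This MacBook Pro display is gorgeous."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]

# Generative direction: condition on what you want, and get text out.
generator = pipeline("text-generation", model="gpt2")
print(generator("A glowing five-star review of the MacBook Pro:",
                max_length=50, num_return_sequences=1))
```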

SPENCER: I also just want to point out that there are a lot of machine learning algorithms that work on other types of data besides text and images, but those are just two examples.

JEREMY: Yeah, and that's a really important comment. Honestly, in the face of deep learning, so many ML methods and modalities have been forgotten: gradient-boosted decision trees, all of the tabular, spreadsheet-style data that you'd like to train on. There's so much data in the world that has been neglected. I feel like it's important. But there's another, formal definition of generative modeling which focuses on an inversion of Bayes' Rule. Usually, you try to predict Y from X; a generative model goes the other way, predicting X from Y. It's trying to build a density model of X, of the data itself. Typically, that's hard just because the data is usually much higher-dimensional than the outcome, especially if you're used to working with classification. Images are quite large, so trying to generate every single pixel is, in a way, a much more interesting task than taking an entire large set of pixels and saying, "This is a cat or a dog, one or the other."
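A minimal sketch of that formal definition, assuming a toy Gaussian model of the data: build a density model p(x|y) for each class, then invert with Bayes' rule to get p(y|x). Everything here is illustrative, not code from the episode.

```python
# p(y | x) is proportional to p(x | y) p(y): model the data per class, then invert.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_conditionals(X, y):
    """Estimate p(x|y) as a Gaussian per class, plus class priors p(y)."""
    models = {}
    for c in np.unique(y):
        Xc = X[y == c]
        models[c] = (Xc.mean(axis=0),           # class mean
                     np.cov(Xc, rowvar=False),  # class covariance
                     len(Xc) / len(X))          # prior p(y=c)
    return models

def predict_proba(models, x):
    """Invert with Bayes' rule: normalize p(x|y) p(y) over the classes."""
    scores = {c: multivariate_normal.pdf(x, mean=mu, cov=cov) * prior
              for c, (mu, cov, prior) in models.items()}
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}
```

The classifier drops out for free once the density model is fit, which is exactly the inversion being described: the hard work is modeling X, and the low-dimensional label is recovered by normalizing.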

SPENCER: Normally, you'd be saying, "What's the probability that this image is a dog? What's the probability it's a cat?" That's a really low-dimensional problem. Whereas if you say, "Condition on it being a dog, what's the probability of the first pixel being this color and the second pixel being that color?" And now you actually have a probability distribution over the entire set of pixels. That's what you're getting at, right?

JEREMY: That's exactly the idea. Often there will just be thousands and thousands of pixels. The space of all possible dogs is quite large. It's on a dog manifold, connected in a way to all the other images of dogs. It's just a very large space, whereas the binary dog-or-cat judgment is a super simple outcome. At present, we have a lot of self-supervised models which, given some pretext task like "generate the text of some huge number of documents on the internet," will learn a very high-quality representation of language. Something similar can happen with images, where you train a model to generate images directly; many images that we see online now have been generated. There's a sense that, in creating the data, you can do self-supervision more effectively: you generate a supervisory signal by asking whether you've recreated the thing that was the input. If you can find a ton of images or a ton of texts, then in trying to recreate all of them, there's a huge amount that can be learned about the patterns in text or in images. This World Models paper trains a generative model to create the sequence of images that is the game, and then uses that generative model to make decisions about what to do. In practice, it generates a much larger training set than the actual game provides, and it can take the general principles that came out of the generative model and learn a decision set inside the model, a set of decisions that makes sense in that simulated game. Then you take that decision set back to the actual game, instead of your representation of the game, and you perform more effectively based on your simulation, because the model you generated of how the game works was accurate. In a lot of cases, if you can build an accurate generative model, you can generate far more data and, in generating more data, learn a more accurate decision set, and using that decision set in the actual game improves performance. It's just really a beautiful idea to move to a model-based framework. It's been around forever, so it's hard to give World Models the credit for all of it. But they did a really good job of demonstrating the technique with a real system and creating a great visualization. You can watch the video on David Ha's website of this agent driving in a game that's being generated by the agent itself.
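Reduced to a runnable toy, the loop being described looks something like the sketch below. This is a schematic stand-in, not the actual World Models architecture (the paper uses a VAE plus a recurrent dynamics model); here the agent learns simple linear dynamics, "dreams" with them to pick an action, then transfers that action back to the real environment.

```python
# Toy model-based loop in the spirit of the discussion above:
# 1. collect experience, 2. fit a generative model of the dynamics,
# 3. plan inside the learned model, 4. act in the real environment.
import numpy as np

def real_step(x, a):
    """Hidden true dynamics the agent must discover."""
    return 0.8 * x + a

# 1. Collect experience with random actions.
rng = np.random.default_rng(0)
xs, acts = rng.normal(size=200), rng.normal(size=200)
next_xs = real_step(xs, acts)

# 2. Fit a model of the dynamics (here, just least squares).
A = np.column_stack([xs, acts])
coef, *_ = np.linalg.lstsq(A, next_xs, rcond=None)
model_step = lambda x, a: coef[0] * x + coef[1] * a

# 3. "Dream": search for the action that reaches a target of 1.0,
#    using only the learned model, never the real environment.
target, best = 1.0, None
for a0 in np.linspace(-2, 2, 81):
    err = abs(model_step(0.0, a0) - target)
    if best is None or err < best[0]:
        best = (err, a0)

# 4. Transfer the dreamed action back to the real environment.
print("dreamed action:", best[1], "real outcome:", real_step(0.0, best[1]))
```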

SPENCER: It reminds me a lot of the technique you sometimes hear Olympic athletes use, where they'll imagine themselves taking the actions they'll need to take once they're competing. They have this visualization practice. There's also an incredible example of one of the best climbers in the world using this to practice climbs he's never done before. He watches a video of the climb and then imagines every move in detail. It really looks like he's doing the moves, even though he's just lying on the floor.

JEREMY: I did this. There's a book called Mind Gym which describes it in intimate detail. I used to play sports. It feels like a previous life but, back in college, I was a top ten player in the country at Ultimate Frisbee, seventh in Callahan voting, which is sort of the Ultimate Heisman. I got there by using Mind Gym habitually, where the idea is to take all of the highlights of your experience, all of the greatest catches and most successful hucks that you've thrown, and play them back to yourself as a visualization before every practice and every game. You enter your practice on the high that you were on when you took some incredibly decisive action, made a decisive play that completely transformed the game. In expecting yourself to play that way, you come to practice with an intensity that other people, who are lazing themselves into the experience, really just can't meet. There's also the sense that, in going over the patterns of great plays again and again in your mind, you are training yourself. You love Josh Waitzkin's The Art of Learning. In that book, he breaks his arm, fights with one arm, and at night intensely visualizes those exact same fights using his broken arm. When the doctors took the cast off after six months, the muscles of the arm were intact, and three days later he won the world championship. At some level, the neural patterns he was using were capable of influencing the actual tissue. There's a very intense story to be told there. If you believe Josh, these visualization experiences are absolutely worthwhile training data and, in a lot of cases, are much better training data than actual physical training. I don't know if you've tried these visualization experiments, but after a week of this, your concept of what kind of player you are changes. The way that you conceive of yourself changes.

SPENCER: One of the things that I love about that Josh Waitzkin story, if I recall correctly from the book, is that he'd been practicing fighting with just one arm so long that eventually he learned to actually fight with one arm pretty well. Obviously, he was much less good than with two arms. But then, when he got his cast off, it was almost like he had three arms because he'd been forced to use one of his arms as two arms, and now, "Wow. I'm a three-armed fighter. Oh, my gosh." I thought that was really interesting.

JEREMY: I think it's incredible to add constraints to whatever your process is. Whether you're doing machine learning research or fighting push hands tai chi, you're imposing, "I can't use this algorithm," or "I refuse to use more than this much compute," or "I have to do it using this method." Then, when you remove the constraints, you open up all sorts of opportunities to take advantage of the creative ways in which you dealt with them. So Josh, in not having an arm, learned to counter attacks with a single arm rather than two. You ask, "Why didn't the push hands community develop techniques that allow people to counter attacks with one hand rather than two?" The answer is that they had two hands. If they don't have a constraint, they're not forced to take the creative action that opens up the space of ways to counter an attack. Suddenly he's on the scene with a repertoire that no one else has. There's a very strong sense that he's better off for having been temporarily injured and that, ideally, we would temporarily injure or restrict ourselves in order to succeed. I was talking to Sam Altman the other day about the way that companies that go through a bottleneck — like Tesla and SpaceX going through 2008 — or that start off with insufficient funding develop cultures that are very efficient and that find creative solutions to problems that typically would have been solved with money. There's a way that bloat is actually more damaging. So you would want to constrain the company at first, even if you could give it more capital, just so that it develops the processes of a company that's capable of acting on a small amount of capital.

[promo]

SPENCER: There's also this way that constraints seem critical to being more creative. I find that it's often very hard to think of an idea without some kind of constraint added, and then whatever constraint you add actually very much influences that idea. So if you're trying to think of a business idea — just "OK. Come up with a business idea" — that's really hard. But the more narrow you make it or the more constrained you make it, suddenly the ideas start to flow. Any thoughts on that?

JEREMY: Plenty. I teach workshops on systematizing creativity, and one of the most popular exercises uses this principle. For example, if you want to constrain your thinking in business idea creation, you ask, "How would I accomplish this business goal in much less time than I have in mind? Say I have to get a successful product out within a month, what products do I create?" as opposed to giving yourself a year or ten years. You'll realize that along all of the time constraints — if I had to build a product in a day versus in a week versus in a month — you have a very different concept of what kind of product you would create, and you start to tile the space of possibilities as you move the constraint up and down. But obviously, time isn't the only resource that exists. You have attentional constraints. You have money constraints. You have constraints on your assumptions and on your social network. One way to be systematically creative is to ask yourself how you would solve the problem you're currently solving under some span of resource constraints. "How would I do this with no money? How would I do this with ten billion? How could I productively use ten billion dollars to help solve this problem?" In tiling the space of solutions, you'll get very different solutions, which can in some cases be composed with each other into something better than your original solution. The thing you do under intense time constraint might actually be the best thing to do under another time constraint, but you didn't think of it until you added the constraint. Or maybe you could do that time-constrained action 100 times, and doing it 100 times is better than doing the unconstrained action once. There's a sense that your attention is a fundamental limitation: "How can I attend to this minimally, or with maximal intensity? What would come out of that?" If you come to a problem and can define the kinds of constraints that create creative solutions, you can very quickly generate a space of opportunities that you wouldn't have thought of without the systematized creativity.

SPENCER: I see these constraints as doing at least two things, probably more than two. One of them is like the input that your generative model uses. You're conditioning on this constraint, and then that lets your generative model generate a new thing it wouldn't have generated. But the other is a psychological phenomenon. I think one of the questions I find really powerful is when someone's like, "I wish I could do this but I can't." Just asking, "But what if you had to? What if the world was gonna end? Or what if your family member was gonna die if you didn't do it?" Then suddenly, people come up with all sorts of ideas of how to do it. What I mean when I say psychological is that there's some limiting self-belief about your own capability, and this can free you from that.

JEREMY: It's actually a complete breath of fresh air, on some level, to enter a new conceptual frame, a new set of assumptions. In my mind, most behavior is actually limited by these psychological beliefs. I built this limiting belief extraction system because it just seemed so important to figure out the fears that were in the way of action. One easy example is company founding, where, in my mind, the act of starting a company has been turned into this huge psychological barrier. Rather than it being like a job you get hired for, people have to change their concept of themselves in order to do it. There's a sense that, as soon as you get committed to a particular identity, that identity comes with a lot of assumptions about what you can and can't do. A job, a particular relationship, a background — "I'm from this place" — all of these things typically come with underlying psychological constraints which, in a lot of ways, are completely disconnected from the actions that you could just take. The moment-by-moment actions of writing your book or creating your company have next to nothing to do with the psychological constraints that you've placed on yourself. Finding a way to have people take the actions absent the psychological constraints is, in my mind, one of the highest-leverage kinds of action that I can take. So I'm looking into how to dramatically reduce the psychological burden of company founding by basically just hiring the CEO, hiring the company, and asking, "Can we turn this into another job that a kid out of college wants to get, one they don't see as different in some fundamental or identity-based way from going to some other job?"

SPENCER: I think one thing that stands in the way of people doing things is credentialism, where people basically say, "I don't have the right credentials to work on this thing or even have an original idea about it." Personally, I'm something like a 99th percentile anti-credentialist. I know this because I went and developed a credentials test because I was curious about this, and I measured myself. You can take it on our website, clearerthinking.org, if you're curious. But basically it's about: to what extent do you believe that someone should have the appropriate credentials for something before they give their own opinion about it or go work in that area?

JEREMY: This feels like the source of one of the greatest losses of talent that our society sees, where people systematically invalidate themselves for tasks that are actually really important based on other people's conception of whether or not they have sufficient credentials. There's plenty of this ethos in the Valley. Part of the reason I created Inspired Autodidacts — where the "auto" is about self-learning — is that the credentialing institutions don't have systems for all sorts of learning that I think are fundamental, that are really necessary if you want to make great decisions and build the kind of institutions which can create the progress that's worth creating. There's just an intense frustration that, because no one has created a creativity credential, people feel like they're not allowed to try to systematically master creative processes. The institutional frame says that there has to be a credential for something in order to optimize for it. But practically speaking, what it means is that the identities people are capable of taking on are those demarcated by the credentialing system. So if there's not a job or a PhD that backs the kind of beliefs that you want to have, the kind of thing that you're interested in becoming is invalidated. You end up with a massive drop in the exploration of the space of possible identities and lifestyles. I really love this concept of lifestyle design from Tim Ferriss's The 4-Hour Workweek. He describes in some detail the sense that you can design every aspect of your life or of your identity, which is just incredibly freeing as an experience: comprehensive freedom from limiting beliefs, where the credential-identity complex gets left behind and a creative set of identities can be picked up. What pathway to creating communities that could systematically do away with credentialing did you have in mind? If you had to solve this for your friend, how would you solve it? Because I do spend a lot of time trying to break my friends out of credentialist frames. The classics: I don't have enough time. I don't have enough money. I need the right connections. I don't know enough. All of these bottlenecks. Often, I think they're just courage bottlenecks. If you could create sufficient conviction or sufficient courage, then there'd be success. But in my mind, ideally, these things wouldn't require courage. They'd be handled by the external brain, or would just feel obvious in the face of some first-principles thinking about what's actually happening.

SPENCER: One thing I want to distinguish is the credential as evidence. If I know someone has a PhD in physics, I'm gonna way update my belief about how much physics they know, obviously. That's just logical and makes sense; we know a PhD in physics involves studying a lot of physics. But then there's this other question about who's allowed to have an opinion or who's allowed to suggest something new. That's where I see people getting stuck, where they're like, "I don't have a PhD in physics so I can't say anything about physics." But maybe you do have something interesting to add. Now of course, if you have people with PhDs in physics telling you you're wrong, you should take that seriously. You should really consider what they're saying; maybe they're right. But it's this idea that we're only allowed to have opinions on certain topics, or that only certain people can comment on a thing, where I see a lot of these limitations happening. If we look at history, there are a lot of really interesting examples where people had amazing ideas and they weren't necessarily the person with the official title or credential. Even Einstein, when he was working as a patent clerk and wasn't the big person you're supposed to listen to, had his absolutely mind-blowing papers in physics that year. I just think that people don't realize the extent to which we need these outside ideas. Obviously, you'd need to train yourself. If you know nothing about physics, if you never studied physics, you're probably going to be really shit at coming up with ideas in physics. But if you've intensively studied it, you don't need a physics PhD to have opinions on it.

JEREMY: We actually use the physics PhD as a proxy for the feedback loop that the person has been in with hard physics problems and with other physics researchers. It's interesting to watch the proxy work. You say, "I can't evaluate whether you can do physics or not. But I know that other people who can evaluate that have allowed you to graduate with this PhD." The degree to which the signal is connected with an incredibly expensive process is really, really important. In "Intellectuals and Society," Thomas Sowell describes the intelligentsia: a set of thinkers who are unconstrained by feedback loops, who claim authority over some kind of decision-making but, in practice, haven't gained the skill that the credential implies. During the pandemic, it was very easy to watch the responses of the rationality community (for example) in comparison to those of government officials, or even FDA and CDC bureaucrats, and to see the dramatic difference in the quality and competence of their thinking, as if they had been engaged in totally different feedback loops with the kinds of decisions they were interested in. There's a background sense that credentialing processes get disconnected from actual skill when there's not quality feedback, or when there aren't incentives that force that feedback to be good. There's no meta-credential that credentials the credentials in a way that's robust enough to force credentialing institutions to guarantee that those feedback loops are high quality. Then there are entire domains that feel utterly disconnected from the possibility of high-quality feedback. You'll read a book like The Black Swan by Taleb, which eviscerates macroeconomics and so much of social science, and describes entire fields of people engaged with inadequate feedback loops, basically existing in a vague social equilibrium where the memes that succeed are mostly about their virality within a particular culture and have next to nothing to do with the quality of the predictions or decisions that come out of the models of these systems. Part of me experiences intense frustration at the fact that I can't get through to the actual experience that a person had, which is the real determinant of whether they can make high-quality decisions or not, or to the actual thinking that was involved in the decisions that they made, as opposed to a vague credential that came out of a social equilibrium that I don't trust.

SPENCER: I really like your focus on the feedback loops. Because again, if you have someone who has a PhD in physics and then just another random person, of course you're gonna believe the person with the PhD. But let's say you had heard that the random person spent four hours a day reading physics textbooks, using your procedure of "read a paragraph, then try to write it back in your own words, then compare it to what was there," and they were doing that four hours a day for a few years. That person probably knows a hell of a lot of physics. Again, it's really about the feedback loop; that is probably a better feedback loop than your typical class in a physics PhD. If you were able to run that for long enough, you could be competitive. Maybe you could even be better than the average physics PhD. Of course, very, very few people in the world will do that. I just want to add something this conversation keeps making me think about: we have this idea of the elite athlete. You hear about elite athletes and the insane training programs they go through. One day they're training this skill, the next day that skill, and they're lifting weights to balance it out. They're doing visualizations to imagine themselves performing the skills. But we don't really have this idea with regard to thinking. There are very, very few people in society who think about thinking the way an elite athlete thinks about training the body, or with anywhere close to the same intensity. The closest thing might be something like spelling bees, which end up seeming almost like a parody of thinking, where you're just memorizing weird, arbitrary spellings. But one thing that really strikes me talking to you is how you're sort of like an elite athlete in the thinking domain, and I love that about you.

JEREMY: Aww. I couldn't agree more that intellectual training isn't done. You had a recent post; I think I commented with the set of mental models I think are most important, which I trained for 15 minutes a day, every day, by applying them to some real, important problem in my life. It's shocking to me that people don't take thinking seriously when it's determining their decisions and really determining their outcomes. I work in research, and many of the researchers I talk to, when I ask how they improve, will just say, "I try to think harder." There's no real metacognition or reflection on what the process of thinking harder might actually look like. Practically speaking, academia has almost abandoned the problem of creating a phenomenology of thought that makes systematically improving it possible, abandoning it to philosophers who are doing nothing with it. I experience intense frustration at exactly the pattern you're describing. Athletes are getting advice from everywhere. But the books that I care about — like Your Brain at Work, The Art of Learning, Ultralearning, or Deep Work — are mostly neglected. Deep Work as a concept is new and interesting, which shocks me; it should be obvious that you need to engage in incredibly deep and intense mental action in order to perform at a high level. I have this huge subset of my bookshelf which is basically about intellectual performance. Mind Gym is in there. The books we've been discussing are in there. They have the manual for training thinking, and a lot of it is incredibly useful. It's been really useful to me and to others for high-quality decision-making, for planning, for learning more effectively. In my mind, the tools are out there; there just aren't programs for training thinking. Colleges say they'll teach you how to think, but if you ask anyone to define what it means to think well, they'll say something like "being critical." It's really vague and, in my mind, an incredibly consequential question to have a great answer to. Ideally, thinking would decompose along 10 to 15 axes, and you'd figure out how to optimize those axes: "Here's how to improve memory. Here's how to improve creativity. Here's how we're going to extend your ability to attend. Here's how we're going to improve perception. Here's how framing, assumptions, and reframing are going to be dealt with. Here's how we're going to teach you to abstract." In practice, I ended up having to do all of this work myself because nobody laid it out. It shocks me that it's not laid out. The fact that institutions don't do it is, in my mind, a function of the lack of feedback loops pushing educational institutions to actually improve the quality of their students' thinking, as opposed to getting them jobs, or into a state of high status, or getting them to contribute back to the institutions. I have plenty of credentials. I went to Harvard. I joined this elite team at Google called Google Brain. But these credentialist institutions aren't what taught me to think well, so I have a real frustration with them.

SPENCER: This makes me think about how, in PhD programs generally, you'll have no classes on how to come up with new ideas. I did my math PhD, and we had tons of classes where we learned different types of math, different theorems, lots of practice assignments, but no classes on how to develop new ideas in math. It's mysterious that there are no classes on that. When it comes to thinking skills, you get a similar thing: many classes on object-level questions like chemistry — how does chemistry work — but none on how to think about chemistry, or how to think about ideas in general. I think part of it is that we view some thinking skills as almost like magic. Coming up with new ideas is this magic thing; there's nothing to say about it, or everyone has their own way of doing it. Also, people just seem to struggle to come up with really concrete lessons to teach. But it seems like you actually have a lot of ideas for what concrete lessons might be.

JEREMY: Take the genesis of mathematical creation: Henri Poincaré wrote an incredible reflection on mathematical creation which describes a set of techniques, the way you internalize an intuition for a problem, the way you let the problem sit on your subconscious, the way your subconscious, in processing these problems, produces insight early in the morning or in some hypnagogic state between sleep and waking. It seems like Poincaré basically queried a hundred of the greatest mathematicians of the time, trying to understand what happened, practically speaking, in their minds as they were engaged in mathematical creation. As far as I can tell, there aren't graduate programs that make this a point of focus for students whose main job will be to attempt to create new theorems and delineate new domains in mathematics. Part of this feels, as you say, like we treat it as magic. We don't interrogate the thinking process as if it were also a system that could be understood. It's tied to the sense that it's subjective. If you talk to people about consciousness, they'll describe all sorts of theories where there are no high-quality feedback loops, so scientists have a sense that the subjective domain can't be attacked. But in my mind, you can see the outcomes of a person's thinking. I recently put together these five pillars of research experiments, processes, and systematization, one of which is about superforecasting in research, where you ask, "Who can make accurate predictions about which research projects are likely to succeed and which are likely to fail?" As soon as you have that data, you can ask, "Who's capable of generating research ideas which, under these superforecasters' predictions, do very well? And does that in fact happen when you actually execute on the ideas?" Because Tetlock, in Superforecasting, discovered that there are really dramatic differences between forecasters in their ability to predict the outcomes of events. You could imagine researchers doing reference class comparisons between research ideas, doing triage on them, or trying to break seemingly intractable research questions into tractable subproblems which they could make predictive gains on and, in decomposing this process, trying to say something concrete about whose thinking was good and whose was not, about whose predictions came true and whose did not. It's really the creation of an epistemology, a path to getting to truth, that allows the kinds of criticism I would levy to be made super explicit. If I say, as a criticism, "Someone's thinking is strewn with hindsight bias or with the narrative fallacy," you could ask, "Actually, who thinks like this, and how well do their predictions fare against those of people who do not?" Because Tetlock discovers that fox-like thinking outperforms hedgehog-like thinking. And maybe hedgehog-like thinking is popular in research in part because there aren't these feedback loops.

SPENCER: Do you wanna describe fox and hedgehog thinking?

JEREMY: The thought, really simply speaking, is that the fox will put many different models together (from the heuristics-and-biases literature, from economics, from decision-making) and say, "I believe in all of these different models, and I'm going to use them to make a prediction by composing them with each other, realizing which ones are active or not active in a given scenario." The fox believes many, many things, as they say, and will use those many things to make a prediction, while the hedgehog is someone who believes one thing. Ideologues are often like this. Rene Girard, in his mimetic theory, says there's one thing that predicts everything else. A lot of social scientists fall into this regime where they'll have one big idea. It's hard to write an entire book about an idea and not start to really see it everywhere; earlier, we talked about seeing something everywhere. But the hedgehog is really someone who has one incredibly general, incredibly deep belief that they use to make their predictions. I think the main criticism you can make of Nassim Nicholas Taleb's work in "Antifragile" is that he starts to believe his model is the only model: anything antifragile is predicted to go well and everything that isn't is predicted to go poorly, as if it's the only element that should be considered in the prediction.

SPENCER: There are a lot of things that actually follow normal distributions. That's actually really common, too.

JEREMY: Exactly. Mediocristan, as he calls it. Normal distributions are very common and often are the right model to use, which is why they're so prevalent; you actually get lots of feedback saying that the model you're using is correct, even if you're in a situation where there will be extreme events in the future. He happens to be in a domain where his insight is correct. But when he moves out of domain, he enters a world where normal distributions are the right way to think about things, yet he still acts as if they're as deceptive as they are in finance. The fact that the equations quants in finance use happen to make incorrect assumptions doesn't mean that every single domain is going to be like that. It's really important to notice that the insight is conditional. You can't actually be a hedgehog who knows one thing; you have to check, "Am I in the conditions where my model is actually going to work?" If those conditions hold, by all means, use your idea. But the fox realizes that these conditionals have to be checked, has models for when the condition holds and different models for when it doesn't, and actually applies those different models in those different scenarios. The thought is, these kinds of ideas aren't prevalent in research. A lot of researchers are hedgehogs. They'll have one big idea they believe very deeply in, and they'll go with it in the long run. At the collective level, this is actually good, even if it's sacrificial. Many researchers get sacrificed: they all believe in one thing, and the thing is wrong. But the one researcher who believes in the one thing that turns out to be right gets treated as a god, gets treated as (quote, unquote) "godfather of the field," and is rewarded handsomely, in the power-law way, for having believed deeply in something and ignored everything else. There's a question, from context to context, of which kind of research environment or ecosystem you want to encourage or be in. That's also probably a conditional question; it depends on what you want out of it.

SPENCER: Jeremy, before we wrap up, I just wanted to ask you about Deugenesis. What is that? Tell me some of your thoughts.

JEREMY: Concretely, the question of why religion didn't adapt quickly to new epistemologies like scientific thought and new paths to acquiring truth always surprised me. There was a sense that the future of religion was going to be tiled with non-ideological religions that were willing to evoke all of the emotionality of a religious experience without trying to impose dogma on their believers. Just how essential is the dogma? Personally, it didn't feel essential, because I'd had powerful experiences that definitely were legitimate despite not being tribal. I was anti-ideological, but realized that there are a number of goals that could be conceived of as god-like, the properties that you would give to a god: things like immortality; omniscience (becoming all-knowing); transcendence (seeing our reality from the outside as opposed to from the inside); omnipotence. In my mind, there are stepping stones towards all of these. Longevity and life extension is a path towards immortality which ideally would be in the Overton window; you'd ask, "We can extend lives, but how can we extend them indefinitely?" This should be easy to describe as a worthwhile goal. With omniscience, there's a question of what the "all" is. Is that all human knowledge? In one case, you'd say, "With Google search, I can look for anything other humans have written." There's this extractive sense that if I can search all of the documents effectively, I can answer any question and, given a brain-computer interface, may be able to very quickly get access to all the knowledge that humanity has managed to collate on the internet or in any informational format. These are stepping stones on the path to some ultimate goal, which would be conceived of as having all knowledge and making all of that knowledge practical, useful, and accessible.

SPENCER: To make sure I understand what we're talking about here: this idea of Deugenesis is that, for these properties typically only described by religion — being immortal, all-knowing, all-powerful, and so on — humanity is incrementally or asymptotically trying to approach these limits with technology. Is that right?

JEREMY: Yeah. I'd say science — certainly physics and the progress it's made — is definitely under the banner. Science and technology, practically speaking, give us a lot of answers to very deep philosophical questions and, theologically speaking, give us the powers that we, in the past, would have ascribed to gods. The reason we ascribed them to gods — whether in a polytheistic or monotheistic religion — was typically that the properties were really incredible. Being able to live for an incredibly long time, or indefinitely, is the kind of property people would ascribe to gods because they would think, "This is out of my reach, but it is also incredibly worthwhile; it would really be worth having." Bringing these projects into the Overton window and saying, "These no longer need to be held in a different realm; we can take them on as a to-do list," is a big part of the goal orientation of this Deugenesis frame. The decomposition of Deugenesis is god genesis, or god creation, and it asks, "How can we accomplish all of the major properties that gods tended to have?" Not because gods had them, but because the generator of attributing them to gods was that these were the highest properties worth having that we could conceive of. As we conceive of higher and higher goals, we will continue to attribute them to entities which we can't touch or be a part of, even though they're obviously incredibly important, and it's worth asking how we can accomplish them.

SPENCER: I feel like some people would say these are not desirable, that there's something like hubris, or messing with the natural order, that should make us afraid of even trying to approach these things. What would you say to that?

JEREMY: Why do you think that people say that?

SPENCER: Well, I definitely know that people say that because I hear it. That's one way to answer your question. Why do I think people say it? I think sometimes it might be that people rationalize things that they feel are bad and that they can't control. If you know that you're going to die, or you believe you're definitely gonna die, you might want to find some positive way of spinning death. That may be part of it. But I think people also have this sense that there are certain things that just shouldn't be meddled with, that certain things are sacred. Maybe this seems sacred to some people. The idea of life and death may seem sacred, or the idea of omniscience might seem sacred.

JEREMY: Yeah, it's interesting. My experience is that there is a lot of training, practically speaking, that happens, and that it would be very easy to invert the experience of sacredness in somebody trained differently than most people tend to be. Take life specifically: you can imagine training someone to believe that there's a sanctity to life and that life should never be violated. It was fundamentally valuable to believe in human rights, to say everybody has a right to life, and it's interesting that a lot of that training does occur. But then the exact opposite training occurs when you try to conceive of that life continuing beyond what we conceive of as (quote, unquote) "natural bounds" on life, even though there's a bevy of incredibly unnatural actions that people take in order to extend and preserve life and to save the lives of children with medicine. I think the moral and ethical thinking around this is mostly informed by training. The thought is, if you realize that there's a process that has generated your belief, you should ask, "If I could generate a new process, what would it be? Would it be useful? Would it serve me, given that the reason I had the belief in the first place may not be served by the belief?" In my mind, in the face of someone dying, it's really useful to create a sense that what's happened to them has actually been good somehow: they have moved to a higher plane, to a better place. In the face of intense grief and suffering, there's a lot of comfort to be had in that sense. Even if it's not true that the person has moved on to a higher plane, perhaps it's useful to experience their death as having been fulfilling in some way: they had a full life; there was no reason for them to have wanted more, no reason to want more, and no reason to have more. I think all these forms of acceptance are incredibly useful coping strategies that are going to be necessary until it's possible to actually solve the problem. If it's impossible to solve the problem, then you should absolutely hold on to the strategies that allow you to cope with the fact that it hasn't been solved. But I think we are also ignoring the reality that we're beginning to be able to solve the problem. We're throwing attempts to solve it under the bus unnecessarily, when we could be turning them into the highest missions of our society.

SPENCER: What would that look like?

JEREMY: Kids grow up with ambitions to achieve what I'd call a concrete abstraction — typically, kids will say, "I want to be an astronaut" (that was me) or "I want to be president" — they want to reify something that everyone around them believes is really worthwhile. In my mind, ideally, you would have kids who grew up deciding to go into biology — whether genomics or anti-aging biology — to actually solve the problem of death itself, saying, "I don't want my grandparents to die or my parents to die. If I can work on a program with a suite of scientists who are attempting to repair arbitrary human tissue, then I will have achieved the status of an astronaut or of a president. The people around me will love me for having done it." Practically speaking, we use these status orientations to make decisions about whom to value or not value. Then people, seeing how our civilization's status is oriented, decide what to do. In so many ways, our art becomes self-fulfilling because it turns people into heroes or villains. I could very easily see a world where these goals became the things that people aspire to and respect others the most for. Because, frankly speaking, everyone you know and love is going to die. It really is only because we don't believe we have a chance of stopping that, that it becomes okay to spend our time doing all sorts of things that have no relation to extending their lifespan. The possibilities are, in a lot of ways, constrained by our belief that the outcome isn't possible. As soon as the outcome starts to be achieved or becomes possible, you will see a reorientation of the way status is divvied up and of the beliefs people have about what's possible.

SPENCER: It is fascinating how little research goes into extending longevity especially given that age is such a strong predictor of disease. If we could slow down aging, that would be fighting all diseases simultaneously, or most of them, I should say. And yet really a tiny, tiny, tiny fraction of medicine is about slowing down aging. A few pioneers have tried to push it and they get a lot of pushback from people saying they shouldn't be working on that.

JEREMY: They pretend not to be working on what they're working on because the social reaction to their goals is informed in a lot of ways by a religious history and an ethical history that says that life has to end. In a lot of ways, we live in a death cult.

SPENCER: And realistically we're not going to make humans literally immortal. It's more about having more years, hopefully more decades, to enjoy life.

JEREMY: I think so. At least that seems like what's possible to get into the Overton window. Practically speaking, the ability to choose when you die would be much nicer. I'd say 200 to 300 years for me, perhaps. Maybe there's no point at which I'd actually decide to get off. But then there's an interesting question of identity and the continuity of experience. When I say, "I live forever," I'm usually referring to my body's continuity. There's a sense that my conception of myself, or of what I am, can become divorced from my body. Some people really do identify with their thoughts more than with their physical body, for example. The concept is incredibly conflationary. It's useful, of course, to say that every part of your body and every part of your thoughts constitute you; over time they're consistent with each other, so it makes sense, and it's very predictive to use a concept of "I" that works that way. But it's not at all clear to me that these things can't be usefully decomposed in a way that makes identity much more general. The way we conceive of these things will definitely change as we create technologies that question the way our conceptual scheme is currently set up. That scheme only makes sense when you can't experience things that other people experience. If you have a brain-computer interface and you can message another person, and if, when they have some visual experience, you can project that experience onto yourself, it's not going to be totally clear to you what the barrier is between your perception and theirs. In the face of that lack of clarity, standard notions of identity aren't as useful as they were before. It starts becoming a confused way to represent yourself, to say, "I am just the continuity of my body," when suddenly your perception, which feels like a very fundamental part of you, is integrated with other people or even with a network of people. A lot of these concepts, from a pragmatic perspective, become less useful in the face of improvements in science and technology. So when you say you will live forever, that will also change as the way we conceive of ourselves changes.

SPENCER: There are topics, like death and animal rights, where it seems like when someone brings them up, people have this immediate discomfort with the idea. For example, they're uncomfortable with the idea that the animals they eat might be suffering a great deal before they eat them. It seems natural to want to alleviate that discomfort. Both with death and with animal rights, I notice people trying to get back a response as fast as possible that relieves the discomfort: people immediately rationalize the fact that we die, or immediately rationalize why it's okay to eat animals. I just find that a fascinating psychological phenomenon. I expect, as you're saying, that as technology improves and it becomes easier to live longer and longer, it may feel less necessary to do this kind of rationalization. Same with animals: as it becomes easier and easier to not harm animals in eating them — for example, through lab-grown meat or clean meat — people might actually be more able to accept that it's probably bad to keep a chicken in a tiny cage its entire life. The technology actually opens up the psychological barriers. Though I'm curious, what would you say to someone whose knee-jerk reaction is, "Death is actually a good thing"?

JEREMY: My response to the animal suffering question is definitely relevant to my response to this person. With animal suffering, there's a distance from the problem because of the way the system abstracts the experience of suffering away from you. If the cow were in front of you and you personally were killing it, or if you were eating dog and could see or hear the squeals as you suffocated or killed it...

SPENCER: The first time and the second time would probably be incredibly disturbing. The 100th time, it would now be normalized again, right?

JEREMY: Yeah, exactly. Hunting did normalize this; we used to do it this way. I think there's an interesting sense in which our ethics is incredibly flexible. Practically speaking, you yourself will have totally different experiences of killing the first and second times than of theorizing about it in the abstract, even though you're making the same decision about whether or not to eat the animal. Your moral instinct will shift in the face of your behavior. There's a consistency principle from Cialdini, who wrote a book called "Influence" about it: as soon as people behave in a way that's inconsistent with a moral principle of theirs, they get to choose whether to conceive of themselves as a bad person or to conceive of the moral principle as irrelevant or uninteresting. Practically speaking, these principles just don't seem very robust to me, and their lack of robustness is adaptive. People want to be able to switch between countries or between moral regimes. The generator of these disgust reflexes and values is arbitrary memes that have been moving through our population, as opposed to some ordained truth. I really feel that the person who has a knee-jerk reaction against dealing with the aging problem isn't actually broken in some way; they're just reflecting a pretty adaptive movement of ethics, in the game-theoretic sense that ethics is about cooperation. You want to be able to cooperate with members of your religion and your community, and also be predictable to those people. It's just incredibly useful, for a lot of reasons, to hold that position. The thought is, if you can build a community where holding the opposite position is stable... there's this concept of the evolutionarily stable strategy in evolutionary game theory, where a number of people with a different strategy can, when they enter a population, change that population's overall strategy, because the way they cooperate with one another is much more effective than the way others cooperate. So in very practical terms, what you need is a community of practice where people are systematically working on the problem and, in making progress, end up changing the beliefs of the collective, because it's no longer useful to the collective to believe them.

SPENCER: Jeremy, this was super interesting. Thanks so much for coming on.

JEREMY: Absolute joy, Spencer!

[outro]
