October 9, 2025
What does it mean to treat facts as drafts rather than monuments? If truth is something we approach, how do we act while it’s still provisional? When definitions shift, what really changes? How do better instruments quietly rewrite the world we think we know? Are we mostly refining truths or replacing them? When do scientific metaphors clarify and when do they mislead? What public stories make self-correction legible and trusted? What features make science self-correct rather than self-congratulatory? How should we reward replication, repair, and tool-building? Do we need more generalists, or better bridges between tribes? How does measurement expand the very questions we can ask? Is progress a goal-seeking march or a search for interesting stepping stones? Should we teach computing as a liberal art to widen its aims? Will AI turn software into a home-cooked meal for everyone? How do we design tools that increase wonder, not just efficiency?
Samuel Arbesman is Scientist in Residence at Lux Capital. He is also an xLab senior fellow at Case Western Reserve University’s Weatherhead School of Management and a research fellow at the Long Now Foundation. His writing has appeared in the New York Times, the Wall Street Journal, and The Atlantic, and he was previously a contributing writer for Wired. He is the author of the new book The Magic of Code, and his previous books are Overcomplicated: Technology at the Limits of Comprehension and The Half-Life of Facts: Why Everything We Know Has an Expiration Date. He holds a PhD in computational biology from Cornell University and lives in Cleveland with his family.
Links:
Sam's Recent Titles: The Half-Life of Facts and The Magic of Code
SPENCER: Sam, welcome.
SAMUEL: Thank you so much. Great to be chatting with you.
SPENCER: Great to have you here. Do facts have a half-life?
SAMUEL: I would have to say yes, given the title of the book I wrote, The Half-Life of Facts. The way to think about it is: obviously, what we learn in our textbooks might no longer be true, and as we do science, we learn new things, and sometimes things become obsolete or get overturned. But it turns out that underneath all of that tumult and constant shifting of what we know over time, there are regularities. The reason I use the term half-life, as other people do as well, is by analogy with radioactive materials. If I were to give you a single atom of uranium, you would not be able to tell me when that atom is going to decay. It could be in the next fraction of a second, or we might have to wait millions of years. But when we take a whole bunch of atoms and put them together into an entire chunk, suddenly things become regular and systematic. The same kind of thing can be true of scientific knowledge. There are regularities to how what we know grows, changes, and gets overturned over time, and to how we root out errors. So in that sense, yes, there are regularities to how facts change.
SPENCER: Were you able to get an estimate for the actual half-life of certain types of facts, like how many years we have to wait until half of them are considered no longer true?
SAMUEL: Different people have done different analyses of this kind of thing. The clearest one was a study of facts in the hepatitis and cirrhosis literature, related to the liver. The researchers took a whole bunch of papers, gave them to a panel of experts, and asked, based on the abstracts, which findings were still true and which had been overturned or otherwise rendered obsolete. From that they were actually able to create a chart (maybe it's still somewhat metaphorical), and they found it took about 45 years for half of these findings to become obsolete or otherwise not true. There are other ways of measuring it. You can look at when papers in certain fields stop getting citations over time, or how long it takes the literature in a field to double in size. Whether the half-life is being used in exactly the same way you would for radioactive materials is a whole separate question, but people have looked at how different fields change over time, and different fields have different half-lives, just as different radioactive materials do.
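The half-life framing can be made concrete with a quick sketch. Assuming the obsolescence of findings really does follow exponential decay, which is an idealization of the hepatology study's result, the 45-year figure implies a simple survival curve:

```python
# Fraction of findings still considered true after t years, modeled as
# exponential decay with a 45-year half-life (an idealized reading of
# the hepatology-literature estimate discussed above).

HALF_LIFE = 45.0  # years, from the hepatitis/cirrhosis study

def surviving_fraction(t: float, half_life: float = HALF_LIFE) -> float:
    """N(t)/N0 = 0.5 ** (t / half_life), the standard half-life formula."""
    return 0.5 ** (t / half_life)

# Each half-life, the surviving share of findings is cut in half.
for years in (0, 45, 90, 135):
    print(f"after {years:3d} years: {surviving_fraction(years):.1%} still standing")
```

As with radioactive atoms, no individual finding's fate is predictable, but the aggregate curve is smooth and regular.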
SPENCER: If I think about how I would model facts in science, I would imagine that some of them are going to get overturned with a kind of fixed probability of being overturned each year. Maybe any particular fact might have a 1% chance that we discover it's wrong. But then, if you were to draw that process out, there'd be some stable set that is never going to change because they're actually totally correct. And so no matter how many years out, they're just kind of a fixed set. Is that how you think of it?
SAMUEL: I would say yes, but with a number of caveats. In general, the closer we get to the core of what we know, the less likely those things are to be overturned. When you look at things at the frontier of knowledge, that's where we know the least. It also happens to be where the most exciting things are happening, but that's where a lot of the turmoil happens. That being said, there are situations where things we think are in that core can be overturned. For example, my grandfather, who was a dentist, learned the wrong number of human chromosomes, which is kind of crazy. It turns out there was a period from the 1930s to the 1950s when we had microscopes and visualization techniques good enough to see chromosomes, but apparently not quite good enough to count them accurately. So he learned that human cells have 48 chromosomes instead of 46, until a better technique came along and the error was rooted out. There are situations where those kinds of things can happen, but by and large, things that seem more certain, closer to the core, are less likely to be overturned. Overall, we need to keep in mind that scientific knowledge is ultimately in draft form. I was talking to a professor of mine from graduate school, and he told me a story: he came into class to lecture, I think on a Tuesday, and taught some topics. The next day, he wrote a paper that overturned everything he had taught the day before. Then he went back into class on Thursday and said, "Remember what I taught you? It's wrong. If that bothers you, you need to get out of science." There's this idea that science is not just a body of knowledge.
It's really much more about a rigorous means of querying the world, and sometimes, because everything is in draft form, we are going to learn lots of new things that invalidate what we thought was true before. Oftentimes, it's much more at the frontier where things are changing a lot more, but occasionally it can be a little bit more in the core.
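Spencer's proposed model a couple of turns back (a stable core of facts that is never overturned, plus a fixed annual overturn probability for everything else) can be written as a simple survival curve. The 1% annual rate comes from his example; the 30% core fraction is an arbitrary illustration:

```python
# Two-population model of facts: a fraction `core` is never overturned;
# the rest is overturned with a fixed probability each year.
# Both parameter values are illustrative, not measured.
import math

def surviving(t: float, core: float = 0.3, annual_risk: float = 0.01) -> float:
    """Fraction of facts still standing after t years."""
    return core + (1 - core) * (1 - annual_risk) ** t

# At a 1% annual risk, the non-core population has its own half-life.
half_life = math.log(2) / -math.log(1 - 0.01)
print(f"non-core half-life: about {half_life:.0f} years")

# Unlike pure exponential decay, this curve flattens out at `core`
# instead of going to zero: the stable set persists indefinitely.
print(f"surviving after 500 years: {surviving(500):.0%}")
```

The distinguishing prediction of this model is the floor: a pure half-life curve eventually decays toward zero, while a core-plus-churn curve levels off at the size of the stable set.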
SPENCER: It seems to me that there are quite different types of knowledge being overturned. As an example, there's a famous quote that says something like, "One theory is that the Earth is a sphere; another theory is that the Earth is flat. They're both incorrect, but they're incorrect to very different degrees." The Earth is much closer to being a sphere than to being flat, even though it's not a perfect sphere. So that's one kind of overturning: we refine the fact. We say, "Oh, it's not actually a sphere. It's oblate. It has these slight irregularities." But you're not exactly overturning it; you're more revising it.
SAMUEL: Yeah. The quote you're referring to is from Isaac Asimov. Someone wrote to him saying, "We used to think the Earth was flat, then we thought it was a perfect sphere, and now we know it's an oblate spheroid." Asimov responded, "If you think that the two statements are equally wrong, then your view is wronger than both of them put together." Asimov even quantified the amount of error for each model. Even though they are qualitatively different mental models of the world in practical terms, each has a measurable error relative to the true shape of the Earth, and those errors shrink: we are, in fact, getting closer and closer. I think that is the right way to think about these things. Just because knowledge is being overturned, or we're learning new things about the world, doesn't mean that everything is subject to being overturned at any moment, and therefore that we're living on shifting sands of knowledge. It's much more about asymptotically approaching the truth. As we learn, we are going to overturn things, but presumably this is all part of the process of getting closer and closer to a true understanding of the nature of the cosmos, or whatever specific thing you're trying to understand.
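Asimov's point can be checked with rough numbers. Using standard approximate reference values for Earth's equatorial and polar radii (these figures are not from the conversation), the spherical model differs from the true oblate spheroid by only about a third of a percent, while a flat model's error grows without bound with distance:

```python
# How wrong is a sphere, compared to the true oblate spheroid?
# Radii are standard approximate reference values, in kilometers.
equatorial = 6378.1
polar = 6356.8

# Flattening: the fractional difference between the two radii, i.e.
# how far the oblate spheroid departs from a perfect sphere.
flattening = (equatorial - polar) / equatorial
print(f"sphere vs. oblate spheroid: off by about {flattening:.2%}")

# A flat-Earth model predicts zero curvature everywhere, so its error
# is not a small percentage: it grows without limit as distances grow.
# That asymmetry is the sense in which "flat" is far wronger than "sphere".
```

Each successive model shrinks the error by orders of magnitude, which is what "asymptotically approaching the truth" looks like in numbers.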
SPENCER: It seems to me there's another way that a fact can change where it's actually due to shifting definitions. An example of this is you might say, "What is the rate of autism in the population?" The answer today is very different than it was 50 years ago. My best understanding of this situation, and talking to experts, is that largely that's due to redefinitions of what autism is, plus improved screening. If we take just the redefinition piece, sometimes the words just don't mean what they used to mean.
SAMUEL: Yes, and I can't speak to that example, but certainly we do redefine things over time. For example, when Pluto got demoted, it wasn't as if Pluto suddenly no longer existed in the solar system. We discovered new objects in our solar system on the same order of size as Pluto, well beyond where Pluto is. We realized, "Oh, wait, there's actually a whole category of objects out there," and in that case we redefined the category Pluto belongs to. But it didn't change Pluto itself.
SPENCER: Oddly, it wasn't even really information about Pluto itself, as far as I understand it.
SAMUEL: It was information about other objects in the solar system. But this is one of the things you're pointing to: oftentimes, better measurement techniques, or the ways we define and measure the world around us, lead us to different understandings of it. For example, as we've gotten better at measuring the heights of mountains, we have a better figure for the height of Mount Everest, but we also know that its height and location are shifting a little bit. One of the things I think a lot about is the way measurement techniques go hand in hand with learning new things about the world, which is interesting because measurement techniques are often bound up in technological advancement. If you have better particle accelerators for smashing atoms together, you're going to learn new things about the world, and those better accelerators go hand in hand with technological advances. There's a very interesting relationship between scientific knowledge and technological advancement, in terms of how we think about the world and try to understand it. But you're right: when I talk about knowledge changing, there are a lot of different categories that I collapse together. One reason I collapse them is that it often comes down to how we perceive the knowledge. For example, one fact that is changing is simply the number of billions of people on the planet, and that's changing because people are being born faster than people are dying. The interesting thing there, though, is that when we're young, we learn a lot across different topics: geography, different areas of science, and so on.
As we get older, we specialize, and the information in our heads often gets frozen at the point when we first learned it, regardless of whether the number of billions of people on the planet is changing or whether we now have better understandings of dinosaurs.
SPENCER: Yeah, dinosaurs are classic examples.
SAMUEL: Totally. When I was young, I learned about dinosaurs as these gray-green reptilian monsters, and now they're fearsome chickens. You don't realize that information is changing until your kid comes home and says, "Guess what? Dinosaurs have feathers and look totally different, with bright colors." I think the same thing is true of facts where it's not our knowledge that's changing but the state of the world itself, like the number of billions of people on the planet. Someone told me a story at some dinner; I think he was a hedge fund manager. He said something like, "Of course, that makes sense, because there are only four or five billion people on the planet," and that hasn't been true for quite some time. People often stay stuck in the earlier information they learned, and it often proceeds almost generationally, until you're confronted by the new generation learning all the new information, and then you have to revise and rethink what you know about the world.
SPENCER: Some facts, we kind of know when we learn them that they're temporary. "What's the temperature today?" We know that's going to change. But something like the population of the world changes slowly enough that your brain goes, "Oh, I remember that," without needing to update it. Is this related to what you call, I think, a mesofact?
SAMUEL: This is exactly that. There are lots of different categories of knowledge, based on the speed at which they change. You have things that change really rapidly, like what the weather's going to be tomorrow or what the stock market closed at. Those are the very rapid things, and we're pretty good at recognizing that those will change. Then you have the other extreme: the number of fingers on the human hand, or the number of continents on the planet. We don't really have to worry about those shifting. But then there's a whole category of knowledge that changes on the order of decades, on the order of a human lifetime. That's the meso scale. These are the mesofacts, and they are the hardest ones to update, because we often learn them alongside the things that really never change, and then we forget to update them, which becomes very difficult to grapple with.
SPENCER: There's one more category of fact that can change that I think is really fascinating: sometimes we derive conclusions about the nature of the world from what we know. A great example is when people believed Newtonian mechanics was true, they looked at those laws of physics and said, "Well, the world is a clockwork universe. If you know the initial conditions of particles, their positions and velocities, you can predict exactly what will happen. So everything's deterministic." That was not so much something people directly observed; it was a conclusion they drew from the facts known at the time. Then quantum mechanics came about, and people said, "Oh, no, that's not true. Actually, probability is inherent to the fundamental nature of reality." Now, of course, maybe that will one day change. People are pretty confident that probability is actually fundamental, but who knows? There could be some new theory of physics showing it's not actually probabilistic. It's really hard to say. But when we want to draw sweeping, grand conclusions from the facts we have, those conclusions can be extremely wrong even if the facts themselves are close to accurate. Newtonian mechanics works really well, but the conclusions drawn from it might be wrong.
SAMUEL: Yeah, and certainly. The interesting thing with Newtonian mechanics is, even though you're right, it's not entirely the way the world works. My sense is, if you're a civil engineer building a bridge, you're not using quantum mechanics because at that scale, it doesn't make any sense. You're actually using much more of the Newtonian bits of knowledge. But I think what you're talking about is when we have these new theories that become the cutting-edge version of knowledge, we then use that more broadly, even if you're outside of the field of physics. It's kind of this larger framework for thinking about the world.
SPENCER: Yeah, exactly. It influences people's view of the world itself, or how things operate.
SAMUEL: I was going to say, this actually reminds me a little of how people have thought about the brain and the mind over time. Oftentimes the brain is described in terms of the cutting-edge technology of the day: people saying the brain is like a steam engine, or like a clock, or like a computer. The truth is, the brain is like a brain. Sometimes we can draw analogies to those other things, but it's its own thing. For me, these theories are theoretical frameworks for understanding the world. They can be very useful as mental models. But as long as you hold a set of mental models loosely, and don't anchor on the newest theory of one specific corner of the world, you won't draw grand, sweeping conclusions from it. For me, it's less about the way science is changing and more about how cutting-edge scientific discoveries filter into the popular imagination. To be honest, I'm not really sure how best to think about that. It's almost a cautionary tale: because these ideas are really interesting and powerful, we have to be even more careful about how we use them. For example, the idea of evolution is an incredibly powerful model for thinking about lots of things outside of biology, but it's still, in many cases, a metaphor or an analogy. It can give you some insight, but you don't want to push the metaphor so far that it bends and breaks. So for me, it's about having a lot of mental models and holding them a little loosely, especially the farther you are from the specific area in which they were developed.
SPENCER: Yeah, that's fascinating. It does seem like certain scientific developments lead to new understandings of society that are loosely based on scientific development. You go from evolutionary theory to Social Darwinism, which is not an implication of evolutionary theory, but it draws on a lot of the metaphorical ideas of it. Or you have the new developing field of economics, and suddenly new social systems are constructed, and people say, "Ah, well, look, economics proves that the optimal society is like this or like that," which is sort of true, but it's quite a leap from what the theory is actually telling you to the social view.
SAMUEL: Yeah, I think the idea is that even in those specific fields, the theory is probably an approximation of the real world. As you get farther away, the equation of reality with the model is going to break down. I view it the way people talk about models more broadly, which is that every model is some sort of simplification. "All models are wrong, but some are useful," as the statistician George Box put it. We have to think about all of these as models. The question is, how useful is a model for understanding the specific area of the world you're trying to understand? Sometimes the answer is, "Okay, it actually is pretty useful." Other times it's, "Okay, this is a fun thought experiment or a fun analogy, but it's so oversimplified that it could be useless, or verge on dangerous."
SPENCER: We've talked about different reasons for potential skepticism about so-called facts. What do you think the right attitude is to have in light of this?
SAMUEL: For me, it goes back to what I was saying earlier: ideally, we are asymptotically approaching the truth. Recognize that scientific knowledge is in draft form, but also that, as an endeavor, we are getting better and better at acquiring knowledge about the world. We need a certain amount of epistemic humility when we approach these things: the idea that we don't necessarily know everything, and that's okay. You see this both in the realm of scientific knowledge and in the world of technology, where we think, "Okay, if we apply our rationality, we should be able to understand the world, especially systems we ourselves have built." Increasingly, especially with AI, it is becoming clear that we don't fully understand these systems. They are incredibly complicated. The computer scientist Danny Hillis has referred to us moving from the Enlightenment, when we applied our rationality to understand the world around us, to the Entanglement, where everything is so hopelessly interconnected that we can't fully understand it. Whether or not that's entirely accurate, I do think we are building systems we can't fully understand. More broadly, whether we're thinking about scientific knowledge, the nature of the cosmos, or our technological systems, we don't need to be in one of two states: perfect understanding or complete ignorance. Many people, when they see knowledge being overturned, or systems we don't fully understand, immediately think, "Oh, we don't fully understand it; therefore, we don't understand anything." That's not how it works. There is a whole lot in between complete understanding and complete ignorance, and I think that is where we have lived for most of human history.
We have to recognize that we are moving closer and closer to better understanding, but with a certain humility, recognizing that we are finite beings. We have made unbelievable advances in understanding the world around us, but there will be situations where we fall short, whether in prediction (think of mathematical chaos), in bits of scientific knowledge that get overturned over time, or in trying to understand the technological systems we ourselves have built. And that's okay. For me, it's about holding onto an almost refreshing sense of humility, which I think will be the most productive stance for understanding the world going forward.
SPENCER: It seems to me that science denialism can latch onto this, where it's easy to point out examples where scientists were terribly wrong. You can always find examples of fraudulent science. In any large enough field, there will be someone who is a fraud. You can find examples where scientists overclaimed, made it seem like they knew something and didn't. If you cherry-pick these examples, it's pretty easy to paint a picture of, "Well, we can't trust science. We can't trust scientists." Therefore, usually what they leap to is something that we have way less evidence for.
SAMUEL: I think part of the problem there is, if you think of science as capital-T Truth, and then we notice some cracks in it, suddenly people think, "Oh my God, we thought this food was good for us, and now we think it's bad for us. How can we know anything?" You just throw your hands up in despair. For me, it shows that ultimately, science is done by people. It's a deeply human pursuit. Scientists are humans, and we are imperfect. The fact that science is this endeavor, this rigorous means of trying to understand the world better, means it will have hiccups and things that get overturned along the way, and presumably scandals and other kinds of issues. But it's a self-correcting mechanism. A healthy dose of skepticism is good; it's the unhealthy dose of skepticism that is bad. Striking that balance can be hard, especially if you're coming from outside the world of science. For me, we should trust science not as a body of knowledge, but much more as the process of trying to get better at understanding the world. It's at the frontier where we know the least and where things will get overturned, but that's also where the most exciting things are happening. Recognizing that can be inspiring. Going back to the draft form of science: it's always in draft form. Maybe this is a temperamental thing; certain people get really inspired by that and want to add to the body of knowledge and make it better. Other people, when they hear it's in draft form, get worried. I'd love it if we could take that scientific mindset, of approaching the world as always in draft form and constantly improving, and find a way to export it to the larger population. I think that would be really healthy and exciting.
SPENCER: Yeah, maybe people are a little worried about putting the drafts out too quickly, because that can feed into this: "Well, you told me this, but it's actually not true." And I think scientists sometimes have a negative reaction when colleagues get a lot of attention for a brand-new paper, rather than for ideas that have undergone rigorous testing but still haven't gotten much attention.
SAMUEL: Yeah, there's definitely a trade-off there. Certainly there are situations where, in explaining things for a popular audience, there's a tendency to strip out all the caveats, the nuance, and the uncertainty, rather than saying, "Okay, this is a new discovery. It's subject to the following conditions. There are a lot of things we haven't worked out yet, and further work needs to be done." Unfortunately, that's sometimes harder to articulate when it gets filtered through popular media. I'd like to think, though, that that's not entirely true, and that society as a whole has space for nuance and recognition of these kinds of things.
SPENCER: Would you like to think that? Or do you actually think it?
SAMUEL: I'd like to think that I do. Perhaps I'm being hopelessly naive and overly optimistic, but I don't think it's impossible to convey to a broader audience. Part of it, going back to that sense of excitement, is that there is excitement in the new and the novel, in uncertainty, in a well-crafted experiment, all these kinds of things. The downside is that some of these things don't work out, or there are experiments that can't be reproduced, but that's all part of the process. For me, going back to what we were saying earlier, it's when you conflate science as a body of knowledge with science as a process of learning new things that the messiness becomes a problem. If science is just a body of knowledge, then you have to be really certain, and you can't talk about the process. But if it's a process of learning new things, then there's a lot of space for all that messiness, which for me is really exciting.
SPENCER: Sometimes I can feel frustrated by both the science denialist crowd and the rah-rah science crowd, because they both seem to be perceiving science in stark, inaccurate ways. Clearly, the science denialists believe scientists are all lying to us, and it's all nonsense. Obviously not true. Then, the people who are rah-rah science want to treat science as much more reliable than it is. They often fail to recognize that there are pockets of science or scientists that are not doing what I would call science. If you think about this ideal of a process, that process is not always being followed by scientists. We hope that scientists are doing this process, but you can point to examples in history where it's just not happening.
SAMUEL: Yeah, and certainly, you want that process to actually be followed. It's not always happening, or people are making grand, sweeping claims without the data or the insight to support them. But ultimately, the fact that science has within it the mechanisms for self-correction makes it very powerful.
SPENCER: It does if it's followed in the right way. But that's why I think this breaks down. There are pockets of science at different points in time where it no longer has the self-corrective mechanism.
SAMUEL: What examples would you give for that?
SPENCER: To me, the best example is the period of maybe 10 to 20 years in psychology where a lot of the research was just not true. In fact, you could see this because you can redo the studies from scratch and you don't get the same answer. About 40 to 50% of the time, you do not get the answer they got if you redo the same experiment.
SAMUEL: Yeah. I think you're referring to the reproducibility crisis. One of the interesting things, actually, is that in my book, The Half-Life of Facts, I discussed the idea that we want people to try to reproduce experiments, but by and large, you don't really get much credit for reproducing someone else's experiments. I was kind of defeatist there, thinking maybe these things were not going to happen. Since I wrote that part of the book, I have been gratified to be proven wrong. A lot of people have done really good work trying to reproduce results and clean up parts of science where there was messiness or a lack of reproducibility. It seems we are increasingly getting better mechanisms in science, even if it sometimes takes a bit of a detour and things take longer.
SPENCER: And a longer time scale.
SAMUEL: When you take a step back and look at the larger picture, even if you go back to when people were doing things around the miasma theory of disease or the luminiferous ether, those ideas have been overturned. There was a long period where we had those ideas. But if you take the longer view of science, it does seem to have these self-correcting mechanisms.
SPENCER: I think it's worth asking, what makes science work? What are the features that are necessary and what are the optional aspects? It's optional, I think, to have academic journals work exactly the way they do today, but it's essential to have people able to figure out what exact experiment you did, and then they can actually redo it themselves and check that it works. If it doesn't work, they can tell everyone. You don't need it to work exactly the way it does now, but you do need some kind of mechanism where people are trying to redo each other's experiments and the information of whether they were able to redo it gets spread widely.
SAMUEL: Yeah, I agree. There are certain details of how science is done that are entirely contingent based on the development of the research university and things like that. You could still do science even without those kinds of things. But there are other kinds of things that are fundamental to that self-correcting enterprise, like the ability to share results and provide enough information that your work can be reproducible. You need a way of having this conversation across fields where people are building upon each other's work, so that you're not always starting from scratch. I think certain elements of providing credit can also be important, because in terms of incentivizing people to do certain kinds of things, the details matter. When I think about how we could redo science or expand how science can work, I think of this massive space of things that are valuable for science. The truth is that you often have this little subset area of things that are valued by scientific academia, like the things that get you tenure. There are still a lot of things that are really important for science, like building software tools, doing interdisciplinary research, or just helping people out in labs that might not necessarily get you credit, but are vital for the scientific enterprise.
SPENCER: I think the measurement issue you mentioned before is huge. A lot of scientific progress is kicked off by better measurement tools. Imagine how much you can understand about the human body when you don't have any tools to look at it, compared to when you get an MRI, a CAT scan, or an ultrasound. Suddenly, it opens the door to understanding things at a more granular and interesting level. In many areas of science, progress was limited until a new tool unlocked the next area of understanding, because it suddenly opens up a ton of new data that you can build new theories from and refute old theories.
SAMUEL: Oh, yeah. I definitely feel this is particularly true in biology, as we have better microscopy techniques for visualizing things within a cell. I remember reading the obituary of a well-known scientist, who might have been a Nobel Laureate, and it mentioned in passing that he discovered certain cellular organelles. I had thought they had been known for hundreds of years, since the advent of early microscopes. I was floored that this person, who had died when I was in college, had actually discovered these things. There are a lot of things that are sometimes newer than we might realize, presumably because of advances in measurement techniques. I don't remember the exact details of that case, but oftentimes, you get better telescopes, and then suddenly you're understanding new things about astronomy. With the advent of the radio telescope, we got evidence of the Big Bang. Before we had those kinds of tools, you could hypothesize, but, at least as far as I'm aware, you didn't actually have a way of measuring these things. Overall, measurement tools and techniques seem to go hand in hand with increasing amounts of knowledge and increasingly better ways of understanding the world.
SPENCER: Yeah, it seems part of what they do is unlock a bunch of new data that you can then test all your hypotheses against. They also often let you control situations much more cleanly, allowing for cleaner experiments where you can do a more precise test than you could before. Imagine if you can control a laser; now you can do experiments you couldn't do before. If you can make a measurement at a really precise level, that opens things up, whereas if you can only make a vague measurement, it limits the precision of your experiments.
SAMUEL: Yeah, I think that's also related to certain things around scale. The more precision you have, the deeper and finer the spatial scales you can delve into. Even if you just have better ways of measuring speeds, or better clocks, that will have an impact; it will have implications for how we look at smaller time scales. Then you have certain things with telescopes, like much vaster spatial scales or even temporal scales, where you're able to essentially look backwards in time because, for things farther away, the light takes a while to come to Earth. That increase in measurement technology, that added precision, also changes the kinds of scales we can even ask questions about.
SPENCER: Now, shifting topics a little bit, you've called for a new type of research organization. What kind of research organization do we need that we don't have?
SAMUEL: To be honest, I don't have a specific type of research organization. I just think we need to actually be expanding the kinds of organizations we explore. The way I think about this is, when you look at the places research is traditionally done, it might be done in research universities, corporate industry labs, some types of university-adjacent independent research institutes, or sometimes even in tech startups. The truth is, those are just a few points in some weird high-dimensional space of potential institutions. We should really be exploring that space and finding out if there are other institutional forms and organizations in it that could unlock different kinds of things. I mentioned before the whole space of things that are valuable for science, and the small subset of things that get you credit toward tenure. We need organizations that incentivize all those other types of activities. Over the past few years, there's actually been an explosion of new types of institutions trying lots of different things: working more interdisciplinarily, as opposed to along department lines; funding people rather than projects, or projects rather than people; being distributed; working in areas that may be harder to study in traditional university settings. For me, I view it as this unbelievable Cambrian explosion of lots of new institutional forms. The downside of the Cambrian explosion is that oftentimes there are a lot of extinction events. Some of these institutions might not necessarily last for the long term. To be honest, I'm not entirely sure which ones are going to last, but I love the fact that people have begun experimenting with new types of forms. We definitely need more, but there are already hints that there are some interesting things happening.
SPENCER: What are the signs to you that we need more? Are there some things you would point to that say, "Hey, look, our research organizations are not doing the job they need to do, and there's a gap here"?
SAMUEL: So, there are certain kinds of gaps. There are certain things in terms of bridging the space between basic research and technological spinouts, where maybe that can be done in the university setting, but sometimes it's a little bit harder.
SPENCER: It's like building a startup based on some...
SAMUEL: Yes, and the truth is that's one example of where maybe there's space for new types of institutions, and I've seen some stuff there. But there are also different kinds of fields sometimes, or new fields, that may be harder to do within traditional academia, and so it takes something outside of traditional academia to act as a galvanizing force. For example, and this is an older one, not from the past few years; I think it's been around since the 1980s: the Santa Fe Institute, which is sort of the flagship institute for complexity science and studying large, complex systems. The idea behind it is that, even though biological systems are obviously special in their own way, and social systems are as well, and different technological systems, it turns out that if you abstract the details away and look at the interactions among all these complex systems and their components, there are actually interesting mathematical or computational insights that can be gained, and there are regularities that can be understood by looking at the similarities between these different things, as opposed to just staying within specific disciplinary domains. The field of network science is certainly one area that the Santa Fe Institute has been very involved in.
SPENCER: They do a lot of complexity science too, right?
SAMUEL: Yes, it's complexity science, network science, and a lot of these different kinds of things. Whether or not complexity science is truly a scientific field, I think having something like the Santa Fe Institute or other organizations outside of the traditional disciplines provides a place where people can explore in ways that jump across different domains and fields without necessarily having to fit as cleanly within a specific department.
SPENCER: Yeah, I've definitely felt the friction of crossing academic boundaries. For example, sometimes in psychology work, which we do a lot of, it's, "Oh, I feel we need a philosopher here, because we're coming up against something where we're not even quite sure what we're measuring or what question we're asking." Philosophers tend to have more skill at that kind of disambiguation than psychologists typically do, since psychologists are more focused on studying facts about the world. Another example is a philosopher who really wanted to understand Occam's razor, and in his quest to understand it, he started bumping up against mathematical problems and computer science problems. Much to his credit, he ended up having to teach himself a bunch of math and computer science. But now his papers are hard for his philosopher colleagues to read because they're written in languages foreign to philosophy. The fact is, the universe doesn't care about how we divide up knowledge. We create very arbitrary boundaries, and in the search to answer a question, you might cross several of these knowledge boundaries.
SAMUEL: Right. Back in the day, when everyone was kind of a natural philosopher, they really didn't care about disciplinary domains. It was a lot easier. That being said, I think deep expertise is really valuable. But the ability to bridge different domains and bring people from different fields together to say, "Oh, you have something valuable to share with each other," is really powerful. If you can overcome the jargon barriers, you can learn something new. Sometimes this can happen within a single individual, like someone who's a little more of a generalist. People talk about the idea of the T-shaped individual, where the vertical part represents someone deeply steeped in a specific discipline who is also comfortable being the horizontal bar of the T, jumping across different domains. There's something to be said for cultivating people like that, whether they're generalists, T-shaped individuals, or polymaths. We need ways of allowing them to thrive in the world of research, and in our current organizational structures, it's sometimes harder for those kinds of people to do that.
SPENCER: When working on a PhD, generally speaking, you have to pick a point on the boundary of knowledge and say, "Okay, I'm going to become the world expert in that tiny little thing." You become the world expert in your PhD dissertation topic, but that's often just so narrow.
SAMUEL: That's exactly the kind of thing. We've created the university structure and the PhD structure to create experts. Going back to what I was saying earlier, that's good. I don't think we should get rid of experts; we need them. But we also need other ways of doing these kinds of things, or experts-plus, and trying to find mechanisms for that is one of those areas where there's a gap in the organizational space. There's also the way people think about building research careers, allowing people to move back and forth between the world of research and non-research. For example, right now, if you leave academia in many domains, it's very hard to get back in. There should be ways of allowing people to do that. I mentioned the tech startup world, for example: you work on some research, then maybe you leave to work at a company, and then you come back to do more. That's hard to do. One interesting organization in the computer science realm is called Ink and Switch. They operate in the realm of human-computer interaction, and they describe their operation as sort of like the Hollywood studio model. When you make a movie, you bring a ton of people together who work on that movie for three to six months, and then they go off to their next project. You bring them together for a set period of time, they work on it, and then they move on. That kind of thing doesn't really exist much in the research world, especially in tech, where there is a lot of top-notch talent who might not be enticed to say, "Okay, come here for five or ten years and work on some research." They might want to start a company or do other things, but they often have a few months in between, maybe after a company is acquired, when they're taking time off before figuring out their next thing. These people can then be enticed for a short period to come work on a project. Ink and Switch uses this studio model to allow people to engage with research who might not otherwise be able to do so.
SPENCER: Another topic that you've written about is this idea that we need a new philosophy of technology, or we need to think about it differently. What is the philosophy of technology?
SAMUEL: Very broadly, philosophy of technology is thinking about the nature of technology, how it grows and changes, and how it fits into our society. For me, when I think about computing, and computation in particular, I think of it as not just a branch of engineering. It really is almost a humanistic liberal art that, when you think about it properly, touches upon language, philosophy, biology, art, and how we think in all these different areas. I think about it by analogy with the field of philology, which no one really talks about anymore, but it was an early humanistic domain devoted to studying the origins of words and their relationships. There's etymology, but in the process of doing philology, you also had to study history, anthropology, linguistics, and maybe even some archaeology. It was an all-encompassing field that eventually split and branched off into many different areas within the humanities. I really enjoy that idea, and I think that computer science, and computing broadly, have the possibility of being that kind of all-encompassing thing in a philological way. For me, that's one of the things I spend a lot of time thinking about. My most recent book, The Magic of Code, delves into how to think about code as this all-encompassing thing that is fun to think about. Ideally, if you have this broader perspective on computing and technology, it can give you a better and healthier approach to how you think about how technology engages with the human.
SPENCER: A lot of people view coding as a very technical subject. And computation is maybe even more technical. What's the case that this is actually a very limited way of looking at it, that it's much more humanistic?
SAMUEL: If you look at the history of computing, the people involved might have been fans of computers and the actual details of computation, but they were also devoted to thinking about what computing allowed you to think about. Early on, there were people thinking about the computer as a tool for thought. By the 1970s, Steve Jobs was discussing the idea of the computer as a bicycle for the mind. The idea was based on a Scientific American article that had a chart comparing the energy efficiency of different animals. Humans were kind of mediocre, while an albatross was more efficient. They also measured a human with a bicycle, and suddenly the human with the bicycle beat all the other animals because we were much more efficient. His idea was that a computer is a kind of bicycle for the mind. Whether it's that, or certain ideas around simulation or education, computers, computing, and code can be a very esoteric domain, but it's really in service of all these other kinds of things. There's a TV show, Halt and Catch Fire. Are you familiar with the show?
SPENCER: No.
SAMUEL: Okay, it was about a decade ago, four seasons long, devoted to the early personal computing industry. It follows a few different characters and moves through the early 1980s to the mid-90s. In the first episode, one of the characters talks about computers and says, "Computers aren't the thing. They're the thing that gets us to the thing." I think we've forgotten that computers are meant to make us the best versions of ourselves. It's not just cool tech and gadgets; there is a lot of that. Ultimately, it's in service of making us the best versions of ourselves.
SPENCER: I think a lot of people check Twitter all the time and don't feel that way.
SAMUEL: Correct, and I think that's because we've forgotten that. We've slouched into a really bad version. When I think about the current conversation around technology, it feels very broken. We talk about being adversarial towards technology, being worried about it, or being ignorant about it. Based on things like Twitter or social media, many of those concerns are valid. But when I think about my own experience growing up with computers, there were certainly concerns, but there was also a lot of wonder and delight. There was SimCity, the early Macintosh, the Commodore VIC-20, fractals, and all these weird screensavers. These were spaces for wonder and delight. It's not to say that computing is one or the other, but it should be broader than the current conversation, which has narrowed the view of technology to something we need to be wary of. We should be wary; I have many concerns around certain technological advances, how we use smartphones, and certain things around AI. But rather than just taking things as they are, or predicting where these technologies will go, let's figure out how computing can make our lives more fulfilled or meaningful, and then work our way backwards to ask what those technologies would look like. I think we could do some really interesting things. If you look at the history of technology, many people thought about that, and I feel like we've lost some of that driving purpose behind building these things.
SPENCER: Yeah, it seems today people tend to view computers and smartphones more around either efficiency, it makes you more productive, or time wasting like, "Oh, it's what I do when I have a few extra minutes of free time, and I just want to zone out, or what I do after I'm tired from work or whatever." So it's an interesting reframe to think of it as how do you use technology to give you meaning? What do you think some of the best uses of it to bring meaning are?
SAMUEL: When people talk about AI, they will sometimes say, "Oh, it's great that it can do all these things, but here are the following things that only humans can do," and that kind of makes us special. Of course, 10 minutes later, those things are out of date. It reminds me a little bit of the idea within theology of the God of the gaps: what does God consist of? It consists of all the things we can't yet explain about the universe, and as we learn more about the universe, that definition just evaporates and vanishes. I think we do a similar kind of thing when we think about the uniqueness of human beings, whether it's comparing us to other members of the animal kingdom, like, "Oh, we're the only ones who can use tools," and then we suddenly realize that other animals can use tools, or the same kinds of things with AI. For me, it's much more about the quintessential nature of humans: what are the things that make me feel fulfilled, that give me meaning? Certainly, one thing is that I like thinking; trying to roll ideas around in my mind is really interesting. Certain tools, like AI tools, allow me to stitch together different ideas or surface papers or articles or ideas that I otherwise would never have known about, because of jargon barriers or just because the world has so much knowledge. I think those kinds of things are really useful. I also think about the ways in which we use technology to bring us closer to nature or to other people, not in the sense of, "Oh, social media brings us closer to people. Human connection is key, and we're going to gamify human connection." There are people building almost physical computing, where you have pieces of paper being manipulated on a table, with a projector projecting things and a camera reading them. It's very physical and feels very tangible.
The people working on this, like Dynamicland, are doing some of this kind of stuff. There are also people building a project called Folk Computer who are working on similar things. Not only is it deeply tangible, but it is also very communal. There's something to be said for that, because with a traditional computer, where everything is through a screen, it's a very isolating experience. When you're in a physical space and manipulating things, it has the potential for actually bringing people closer together because they are physically closer together. There are those kinds of examples, and I'm sure there are others as well. One example that I find really interesting is the creative coding movement: building little computer programs that are artistic. I think that is a new form of creativity that really was not possible before computers came around. The idea is that you take the very nature of the computer seriously, which is this thing that might not necessarily be that smart but can do a huge number of calculations much faster than any human being could ever hope to do. When you combine that with certain bits of math, you can make really cool images and animations, which gives us a new window into thinking about art. I think there are lots of different windows and ways of doing this. I don't want to say that there's any single way that people should derive meaning and purpose. One of the interesting things about the human condition is that sometimes we have to struggle and figure that out for ourselves. What do we find most meaningful? I do think that no matter what answer you find, there are ways of using computers to help enhance that rather than diminish it.
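That "simple math plus many fast calculations" recipe can be made concrete with a toy creative-coding sketch. Everything here, the file name, the wave frequencies, the image size, is invented for illustration; it just shows how a few lines of trigonometry, evaluated tens of thousands of times, produce an image no one would draw by hand:

```python
# Toy creative-coding sketch: interfering sine waves rendered to a
# plain-text PGM grayscale image, using only the standard library.
import math

WIDTH, HEIGHT = 200, 200

def brightness(x, y):
    # Three overlapping wave fields; the computer evaluates this once
    # per pixel, 40,000 times in all.
    v = math.sin(x * 0.07) + math.sin(y * 0.11) + math.sin((x + y) * 0.05)
    return int((v + 3) / 6 * 255)  # map [-3, 3] onto [0, 255]

def render(path="waves.pgm"):
    # PGM "P2" format: header, then one row of pixel values per line.
    rows = [" ".join(str(brightness(x, y)) for x in range(WIDTH))
            for y in range(HEIGHT)]
    with open(path, "w") as f:
        f.write(f"P2\n{WIDTH} {HEIGHT}\n255\n" + "\n".join(rows) + "\n")

if __name__ == "__main__":
    render()
```

Changing the three frequency constants completely changes the picture, which is part of the appeal: the artistry lives in tweaking the math, not in drawing.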
SPENCER: It seems with AI, in theory, we could be unlocking new ways of being creative with computers. In practice, I don't know how much it's doing that because if you just say, generate a beautiful photo of x, y, z, and then it generates a beautiful photo, it doesn't really feel like you've been creative. It feels like you talked to an artist and they did all the work, rather than you. Whereas if you could give a two-page description of exactly what you wanted, and then it produced something to your specifications, maybe that would feel more creative. How do you think of these kinds of ideas interacting with the new AI technology?
SAMUEL: Certainly, with the image generation technologies, there can be interesting discussions and debates about what part is actually creative and what is not. I do think a lot of these things raise the abilities of people who might not otherwise ever be able to generate those kinds of things. For example, if I'm not an artist or not trained as an artist, but I can begin to find ways of taking a somewhat vague idea in my mind and instantiating it into an image, that's actually pretty powerful. In the course of figuring out how to use these, when people talk about prompt engineering, it forces me to learn about the types of keywords I need to use that are descriptive of different artistic genres and historical moments in art or painting. It teaches me a whole bunch, and I think that's really interesting. One of the other things that I think is really powerful is the way in which we're also democratizing the generation of software. Historically, software has been the domain of professional programmers or software developers, and those were the only people who could build these kinds of things. When it's only professionals, the only kinds of software that get built are the kinds that can be used by a huge number of people. Anything less than that doesn't seem cost-effective; you can't just build something for a small number of people or just for yourself. You can if you're a programmer, but beyond that, it's very hard. The novelist Robin Sloan has this essay, An App Can Be a Home-Cooked Meal. The idea is that cooking doesn't need to be industrial size, at the restaurant level or the stadium level for a large number of people; you can also make a meal for yourself or your loved ones. The same should be true for building software. You should be able to build software for yourself or for your family.
For a long time, people have tried to build no-code or low-code solutions, and there have been interesting moments throughout computing history. I think the current moment of generative AI, which allows for vibe coding and other ways of spinning these things up, changes how people can go about their day. Before, if they were not programmers and noticed certain needs in their lives that could be solved by software, they would have to shut down that portion of their brain because they couldn't build it. But now they actually can, and I think that is enormously valuable, powerful, and democratizing. Maybe it's not quite as creative in certain ways, where they might not struggle with an algorithm or how to describe things, but overall, it's an unbelievable good for unleashing that sort of creative power and idea generation of actually being able to build software for whatever you might need.
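The "home-cooked meal" idea is easiest to see with an example. Here is a minimal sketch of the kind of audience-of-one program Sloan has in mind; the use case, names, and file path are all invented for illustration, and the point is precisely that no market justifies it, only a personal need:

```python
# A "home-cooked" tool with an audience of one: remembers which books
# I've lent to friends. Stored as a small JSON file; standard library only.
import json
from pathlib import Path

LEDGER = Path("lent_books.json")  # illustrative location

def load():
    return json.loads(LEDGER.read_text()) if LEDGER.exists() else {}

def lend(title, friend):
    ledger = load()
    ledger[title] = friend
    LEDGER.write_text(json.dumps(ledger, indent=2))

def returned(title):
    ledger = load()
    ledger.pop(title, None)  # forget the loan once the book is back
    LEDGER.write_text(json.dumps(ledger, indent=2))

def who_has(title):
    return load().get(title, "on my shelf")
```

A professional product would need accounts, a database, and a business model; the home-cooked version needs none of that, which is exactly why, pre-AI, it usually never got built by non-programmers.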
SPENCER: An idea I've been experimenting with is trying to use these song generators not to make a song that's good, but to make a song that's sort of the perfect song for my brain, going through an iterative process of continuing to tweak it with the AI tools, using how much my brain enjoyed listening to it as the guide. I've been pretty impressed with how well I could begin to optimize it. It's funny because I've played it for other people, and they're like, "I don't like that." I'm like, "Oh, that's fine." Even still, the tools feel somewhat limited; they don't feel quite powerful enough, but it feels like they're beginning to get to where you could do that kind of thing.
SAMUEL: Yeah. This is the whole idea: whether it's building software or creating songs or whatever it is, we now have the potential to make things bespoke for ourselves. It can be the kind of thing that speaks to you. If I have an idea for a computer program that really is only interesting to me, I don't have to worry about the size of the market or things like that. I can now just build that kind of thing, and that's really exciting. The same thing with songs: you can build a song just for yourself that tickles your brain just right. You don't have to worry about whether the song has mass appeal. In fact, it sounds like you kind of don't want it to have mass appeal. You want it to have the appeal of an audience of one, which is great.
SPENCER: Yeah. It's kind of a fun idea that everyone could have their own perfect, bespoke song or image, and it could be something sort of special to us. Do you think in this world of vibe coding, where AI can do more and more, that people shouldn't even bother learning to code anymore? Do you think it's becoming antiquated?
SAMUEL: I don't think it's quite becoming antiquated. I think you can build sophisticated software without necessarily knowing traditional coding tools, based on examples I've seen out there in the world. But I definitely think that if you have more coding knowledge, it makes these tools that much more powerful. My sense is that the people who actually know how to program are the ones able to build things really fast and powerfully, because they're able to combine the supercharging aspects of the AI with their deep knowledge. I also think coding, in the traditional view of programming, is a really interesting aspect of computational thinking, and there's something to be said for it. In the same way that people might use calculators, there's still something to be said for learning arithmetic, and maybe there's the same thing with coding. At the same time, what coding consists of has also been a moving target throughout computing history. We haven't had modern digital computers for that long, but even so, when I learned how to program, I didn't learn how to flip switches on a computer or plug things in or use machine code or binary or even assembly code. I learned various higher-level languages. The truth is, the languages I learned are not necessarily the ones that people use now. Of course, vibe coding is a different sort of thing. Ultimately, they're all part of this process and larger tradition of taking some idea in your head and being able to instantiate it in a rigorous way into software. What coding consists of is going to change. It already has changed. I do think there's something to be said for learning certain traditional programming languages, because you learn interesting things about language and computers and how they operate, and I think you'll be a more successful user of some of these generative AI tools when it comes to software. But coding has always changed, and I am very hopeful and excited about that constant shifting view.
SPENCER: And I think you're right that with the current AI models, you really get more out of them if you already understand the basic concepts and can actually code yourself, because they might hit a wall, and you need to be able to get yourself unstuck. You have to understand what's going on, or you may need to make fine adjustments yourself. But obviously, this stuff is moving really fast, so we don't know exactly what it's going to be like in two years, but at least at the state of the art, I think that's true. You've mentioned before that you spend your time trying to catalyze the adjacent possible. What does that mean?
SAMUEL: Yeah. The way I think about it, the adjacent possible is this idea first developed by the scientist Stuart Kauffman: what is possible next depends on the current state of the world. The adjacent possible of technology depends on the technologies we already have, and the adjacent possible of scientific knowledge depends on what we already know. If we can get to what is possible and make it actual, then of course we can keep on making more things, and that opens up new realms in this high-dimensional space of potential inventions or technologies, or whatever it is. When I think about catalyzing the adjacent possible, the idea, as with catalysts and enzymatic reactions, is really about lowering the activation energy: making it easier to find things in that adjacent possible space and making them more likely. This goes back to what we were talking about with the high-dimensional space of possible institutions. There's also a high-dimensional space of potential inventions and technologies and ideas, and we need better ways of exploring it. That being said, I don't think it's like there's a map where I say, "Oh, I want to get to this kind of thing, and therefore I need to find faster and more efficient paths." For me, the way I view this aligns more with the great book by Ken Stanley and Joel Lehman called Why Greatness Cannot Be Planned. Their argument is basically that when you're searching a high-dimensional space, trying to get directly to some end goal is actually a really bad way to do it. Instead, you should optimize for interestingness or novelty, and then take the things you discover or the new advances you make, these stepping stones, and productively recombine them.
This also means that you're often going to end up with technologies and ideas that get used in surprising and unexpected ways, as opposed to things that are clearly being developed for one specific reason. For example, the vacuum tube was developed far before the ideas of the modern digital computer. Vacuum tubes were used in areas around audio and sound processing, and eventually they were repurposed for early digital computers. I could even be getting that wrong, but oftentimes we just need a larger space of stepping stones to be out there. For me, when I think about catalyzing the adjacent possible, it's about how we can get more people doing really interesting kinds of things. This also relates to my role at Lux Capital, a venture capital firm. Venture capital is not about being one of the ones innovating. We're essentially midwives to innovation, making innovation that much more possible. That's the way I think about my role more broadly in terms of science and technology: how can we get all these ideas to become reality that much faster?
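Stanley and Lehman's stepping-stone idea corresponds to a concrete algorithm, novelty search, and a toy version fits in a few lines. This sketch is illustrative only (the mutation size, archive scheme, and parameters are my own choices, not from their book): candidates are kept not because they score well on any goal, but because they are unlike everything found so far.

```python
# Toy novelty search (after Stanley & Lehman): reward being different
# from what's already in the archive, not progress toward a fixed goal.
import math
import random

def novelty(point, archive, k=3):
    # Novelty = mean distance to the k nearest previously found points.
    dists = sorted(math.dist(point, a) for a in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(steps=200, seed=0):
    rng = random.Random(seed)
    archive = [(0.0, 0.0)]  # a single known "stepping stone" to start
    for _ in range(steps):
        parent = rng.choice(archive)
        # Propose a few mutations of an existing stepping stone...
        candidates = [(parent[0] + rng.gauss(0, 1), parent[1] + rng.gauss(0, 1))
                      for _ in range(5)]
        # ...and archive whichever is most novel relative to everything
        # found so far. No objective is ever consulted.
        archive.append(max(candidates, key=lambda p: novelty(p, archive)))
    return archive

if __name__ == "__main__":
    archive = novelty_search()
    # The archive spreads outward even though nothing ever aimed "far away".
    print(max(math.dist((0, 0), p) for p in archive))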
SPENCER: And what are some of your initial conclusions about how to make it faster?
SAMUEL: I am far from systematic. Going back to the idea of not having a systematic path, it's much more about finding things. I would say there are two paths. One is finding things that are really interesting and trying to highlight them. I feel like I spend a lot of my time connecting people: "Oh, you're doing something interesting, and you're doing something interesting. You two should be talking." This goes back to what we were saying about the interdisciplinary approach and overcoming jargon barriers; there's a great deal of power in acting as connective tissue between different ideas. I spend a lot of my time doing that. But I also know that when it comes to the future, it's really hard to make predictions, and I personally am bad at it. I remember when my first book came out, in 2012, I was so glad it was published then, because I was certain that very soon all books would be digital and there would be no print books. I thought, "I'm so glad I got my book in print before everything became ebooks." Of course, I was wildly wrong. So for me, it's much less about trying to predict the future and more about asking: what is the kind of world I want to live in? Going back to what we were talking about, computers should really be for people. There's something to be said for taking some time to think about the world you want to build, and then asking how to make it more likely. Coupled with being that connective tissue for lots of different ideas, those are the ways I think about it.
SPENCER: Some people think of innovation and creativity as happening through three methods. One is replicating: you take something that's already out there and put a little tweak on it, maybe making it slightly better for some purpose. The second is remixing: you take two or more ideas and say, "What if we took this property of one idea and that property of another and squished them together?" The third is reusing: "Hey, we're doing this thing over here. What if we did it over there, in this other place?" You can find lots and lots of examples of replicating, remixing, and reusing. Do you think that captures what innovation is, or is that overly simplistic?
SAMUEL: I definitely think it captures a lot. Remixing, that sort of creative recombination, really is a powerful idea for how we think about technological advancement and innovation more broadly. I'm sure there are things that feel entirely novel, but more often than not, there's a combinatorial approach at work: the more we can put things together or figure out new uses for them, the more powerful it is. One of the reasons I think about the history of technology a lot is that in the tech world, a lot of folks in Silicon Valley are ignorant of tech history, sometimes proudly so, because it's like, "Only the new thing matters." The truth is, understanding the path dependence and the contingency, finding things people have already done in the past and figuring out why they weren't quite ready then but could be reused or remixed now, is really powerful as a means of innovation. So yes, what you're describing rings true as a set of very powerful features of innovation, which is why, when I think about tech, I think we need more knowledge of technological history as well.
SPENCER: It's fascinating to think about cases where technology regresses. Instead of always moving forward, sometimes it doesn't. A great example is supersonic flight: we had supersonic passenger flight and then lost it as a society, which is really crazy. There's a company, I think it's called Boom Supersonic, trying to bring it back. Do you have thoughts about why we sometimes lose technologies?
SAMUEL: In that case, my sense is there were certain things around safety or legislation that kind of limited the Concorde.
SPENCER: I think the original Concorde and the other supersonic programs were government projects. Some people have said that because they were government projects not designed to make money, they lost tons of money, and eventually the governments decided to shut them down. It wasn't a sustainable business model.
SAMUEL: I don't know the details of that one. I know there was legislation limiting supersonic flight over land because of the sound, though better technologies might change that. I'm probably getting some of the details wrong, but at the same time, I don't think we ever lost the technology as a society; it wasn't like we forgot how to build a supersonic aircraft, even if it left our everyday lives.
SPENCER: Obviously, on some level, in physics, we still understand it. But I heard a rumor that people from the original Concorde program were the only ones who knew how to do certain things, because they spent a decade figuring them out, and now the one guy who figured it out is 85 years old.
SAMUEL: That's interesting. It reminds me, in the realm of tech, and certainly computing more broadly, of legacy systems and legacy code, where you have systems built on things that might be decades old, and the people who built them are long retired or even dead. Someone told me a story about working at Los Alamos National Labs, where they do large-scale computational simulations of nuclear explosions. He said he would come across, not infrequently, large chunks of code with comments saying, "Do not touch this. We don't know what it does." I definitely think there are situations like that.
SPENCER: I think they're often written in Fortran, and it's, "What the heck is this? We've been using it for 20 years."
SAMUEL: "And we're not going to touch it. It does the thing we need." In terms of knowledge loss, I think it speaks to the fact that, by and large, the paths of technological advancement are far more contingent than we realize. We tend to think of technological progress as a wave that washes over us, but it's not; it depends on people making individual choices to keep advancing things, to update software or not, to continue building things in certain ways or not. For me, that reframes technological progress: it isn't a force that simply arrives. Innovation is something we have to choose to do as a society, the accumulation of a lot of individual choices, innovations, and advancements that move things forward.
SPENCER: As a final note, we've talked about a lot of different topics today. Is there anything you'd like to leave the listener with?
SAMUEL: I would leave listeners with a sense of wonder and delight when it comes to science and technology more broadly. We're in a moment when people can be concerned that scientific advances are being overturned, that maybe we don't understand as much as we think we do, or that technology is a great cause for worry or despair. But there is still so much to delight in and to be excited by. Maybe this is just my temperament, a more optimistic and excited view of these things, but I want people to remember that all of that wonder is part and parcel of scientific and technological advancement.
SPENCER: Even the idea of the half-life of facts: there's one framing, "Oh my God, we don't know anything, and all of our knowledge is subject to being overturned." Then there's the opposite view: "Isn't it amazing that we keep learning more and understanding things at a deeper and deeper level? Even in our own lifetimes, we're going to understand new things." So do you want to be excited about it, or do you want to be terrified by it?
SAMUEL: Exactly. I hope that we, at least most of the time, choose the excitement.
SPENCER: Yeah, thanks for coming on.
SAMUEL: Thank you so much. This was wonderful. I had a great time.