July 30, 2021
How can we accelerate learning? Is spaced repetition the best way to absorb information over the long term? Do we always read non-fiction works with the goal of learning? What are some less common but perhaps more valuable types of information that can be put on flash cards? What sorts of things are worth remembering anyway? Why is it important to commit some ideas to memory when so much information is easily findable on the internet? What benefits are derived from being involved in all stages of a project pipeline from concept to execution (as opposed to being involved only in one part, like the research phase)? Why should more researchers be involved in para-academic projects? Where can one find funding for para-academic research?
Andy Matuschak invents tools that expand what people can think and do. His current research focuses on a new written medium which makes it much easier to remember what you read. In previous roles, Andy led R&D at Khan Academy and helped build iOS at Apple. You can read more about his work at andymatuschak.org and follow him on Twitter at @andy_matuschak.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Andy Matuschak about spaced repetition, new media for information retention, creative insight, and research incentives and funding models.
SPENCER: Andy, welcome, really glad to have you on.
ANDY: Thanks, Spencer. I'm excited to chat.
SPENCER: We have so many interesting things in common, and I'm excited to discuss some of the things we overlap on. One of those is on how we accelerate learning, and in particular, spaced repetition. So, would you like to set that topic up for us, and then we'll dig into some of the fun details?
ANDY: Sure. A really common experience that I have — and maybe that you have — is that I read a book that I found super interesting. I just read Religion Explained, and I was so fascinated by all of these theories about why people consistently seem to believe in certain kinds of rituals or attributes of deities around the world. And the author had a bunch of theories about this. And then I'll try to bring these things up in discussion a few weeks later, and suddenly, I'll find that "Boy, I can't remember any of the key details of this book at all anymore."
SPENCER: I hate that experience so much, like, "What's that book about?" And you're like, "Ahhh! I think I remember one sentence or something". [laughs]
ANDY: [laughs] Yeah, exactly. It sounds kind of silly, and it's true that in this particular instance, maybe I don't actually need to remember the details of the author's arguments. But another thing I'm doing right now is trying to develop my background in cognitive science much more seriously. I'm digging into dozens of papers and trying to really deeply understand the experimental methods. And I really do not want that effect to happen with those papers. I really actually want to remember exactly how those experiments were carried out. And so, a thing that's very interesting is that cognitive scientists actually understand a great deal about how memories are formed. There are fairly consistent patterns that describe these dynamics. And there's almost just an algorithm that you can do a series of steps to help you reliably commit something to memory. It's just that standard behaviors when you're reading a book, talking to someone, or doing normal work don't necessarily carry out those steps. And spaced repetition is a way of doing that in a fairly systematic way, in particular, instantiating those steps in software systems that help you carry those things out. So that maybe when you read that next book and then try to discuss it at a party, you don't feel quite so flat-footed.
SPENCER: One of the first blog posts I wrote for my blog was called something like, "Do we really read nonfiction to learn?"
ANDY: Right, because it's so silly: so many people spend so much time reading information that isn't just entertainment, at least they don't think of it that way, right? It's one thing to read a wonderful fiction book that is a great experience for you. But we read these boring nonfiction things, presumably because we want to learn, and then so few of us take the basic steps to actually consolidate that learning. So we forget almost all of it, and then we go on to the next nonfiction book. Whereas with just a little investment, we could retain way more of it, and it would be dramatically more efficient. And that's where I see spaced repetition coming in as part of the solution to that problem.
SPENCER: Right.
ANDY: Right. So I think one interesting question to ask is, "How cheap can memory become?" Very sensibly, some people don't want to remember all the fine-grained details from some random article they read on a Sunday morning if it will be expensive to do so. But if it cost you, say, only 10% additional time on top of the reading time, would you consider it?
SPENCER: Yeah, that just seems like such a good investment: only an additional 10% to remember instead of going to read another article, which would double the time.
ANDY: That's right. And we don't necessarily have things as efficient as 10% right now, but it's not clear that we can't achieve that either. The second thing I wanted to point out is that when I discuss this problem of remembering books, many people ask the same question you asked (whether people really read nonfiction to learn) and answer it in the negative. They say, "No, I'm not really trying to learn that kind of thing." And I think these people are not wrong. A couple of kinds of learning that can happen from nonfiction and that I think are very useful are, for instance, imbibing a set of cultural norms or values, or getting to see how a different mind engages with a problem. Maybe you aren't interested in the specific conclusions that person comes up with, but seeing their mindset and approach is fascinating. And one thing I say when people bring up these kinds of objections is that there's a whole space of things like this that aren't factual but that you can still imbibe from a book. The same techniques you can use to better absorb factual learning can also be used to absorb these semi-intangible things, like increasing the salience of a particular idea or imbibing values and norms. It's just not as obvious how to do it.
SPENCER: Yeah, it's such a good point. Because when we say "learn," maybe the thing you described is sort of an intangible thing, but maybe it is a form of learning. And people then will say, "Well, that has nothing to do with memorizing." In my experience, people immediately want to debate whether memorizing is useful, but in a sense, all learning involves some form of memory, right? It's really a question of what form of memory we're talking about.
ANDY: Exactly. That specific point is something that has been very generative for me. When you start behaving differently and understand something that you didn't understand before, say, something emotionally laden or conceptual, what exactly do you think has happened? If not memory, what other cause could there be? It's true that for something like a fight-or-flight response, there may be different parts of your neurophysiology driving it. But things like absorbing cultural norms and values, or changes in salience, really are driven (I believe) by many of the same consolidation mechanisms that allow you to learn labels in organic chemistry.
SPENCER: Yeah. It seems to me that there really may be different memory mechanisms. I think you're suggesting that one of them is explicit knowledge. That's usually what people think about, like, "Oh, I know the number of states in the United States," or something like that. Just a fact, very explicit. Then there's memory of a concept, which doesn't really feel the same in our minds. Knowing what philosophy is doesn't feel like knowing how many states there are; I don't quite know how to describe the difference. And then there seem to be still other types of memory. For example, a chess master has, in some sense, in their memory all these patterns of which boards look good. But that feels different still. That kind of pattern memory is almost like a fast machine learning algorithm their brain has trained really well.
ANDY: Yeah, that's right. So the second category I mentioned is often called implicit memory, where you behave in a particular way that's influenced by memory, but you're not explicitly aware of the memory. That's in contrast to explicit memory, something you try to remember, like, "Oh, what's that person's name?" You strain for a moment, and the name somehow bubbles up into your conscious awareness. With implicit memory, you learn these concepts about religion, and then next time you're talking with someone about religion, you interact with them differently because your understanding is different, not because you're saying, "Oh, this is like that one thing in the book, and I remember where the author said X," but just because your experience of reading the book has altered your conception of that topic. That alteration does seem memory-driven, and it's something you can use the spaced repetition technique to drive.
SPENCER: Yeah, and then that third type of memory — that sort of very implicit, automatic pattern-recognition kind of memory — people might be really skeptical that you could use a system to train that. But imagine a set of flashcards with a bunch of little mini chess problems on them, and you get faster and faster at identifying them. They could even be automatically generated if you want to get really sophisticated, so you never see the same one twice, but each time you immediately try to pick out the right move to make in a simple chess situation. And you can imagine that this could actually train you to get better and better in a very intuitive sense.
ANDY: Absolutely. I think you're referencing the classic de Groot experiments, where chess masters were presented with a chessboard and then asked to reconstruct it on an adjacent chessboard. The researchers noticed that the chess masters could do this with way fewer glances and way less time looking at the original chessboard compared to novices. It was almost as if they had a more efficient encoding of the chessboard in their mind; they could hold more of it in their mind at once. And in particular, when interviewing the chess masters, very abstract concepts came up in their representations, like lines of force. So instead of thinking, "Oh, Queen is here, Pawn is there," the chess master sees a board and says, "There's a line of force on this side of the board, and it has this particular color valence to it." That's how they see the board.
SPENCER: That's so cool. I was talking to someone who's a really long-time martial artist, and they told me they could see these kinds of lines of force in fighting, which I think was super fascinating. And I don't know how to take it, but they have this feeling that "Oh if I push the person in a certain way, they're gonna fall over." You know what I mean?
ANDY: I definitely believe it. My representations of, say, complex software systems are much more abstract and almost ineffable than they were years ago. When I'm thinking about how to architect something, these kinds of box-and-arrow diagrams start appearing in my head. When I was a kid learning to program, I was contending with syntax, like, "Oh, where exactly does the semicolon go?" Then I got that down, and the unit I'd be thinking at was maybe the line: I'd try to figure out how to write a line that does this, and a line that does that. Then I moved up to maybe the function, and then finally to these really large architectural elements. And that really does seem to be driven by a progressive consolidation of experiences and ideas. And as you alluded to, I think it's possible to accelerate that using memory augmentation systems.
SPENCER: Great. So let's get really concrete here. Could you describe the approach to spaced repetition? How does it work, and how does the practice evolve over time?
ANDY: Sure. So there are two key ideas in cognitive psychology that enable these effective memory practices. The first is the observation of the forgetting curve. When you learn an idea for the first time, it seems to be the case that you will forget that idea over time according to a power law. So you'll forget a whole lot of it on the first day, and then less of it on the second day, and so on. Now, say that after you learn that thing for the first time, you come across it again. The second time you see it, you'll forget it a little more slowly. And the third time, a little more slowly than that. And you can, in fact, induce those second, third, and fourth exposures, and so on, by testing yourself on that concept. So a day or two after learning, say, a vocabulary word for the first time, you can ask yourself, "What is the Italian word for 'to run' again?" "Okay, correre, great." And having come up with that, you will now remember it longer than you would have after the first exposure. This implies that if you can intermittently test yourself on the pieces of information you want to learn, then eventually the forgetting curve will slow down enough that you can retain it for a very long time. The second key idea, which makes this very efficient and not just very reliable, is something called the spacing effect. Probably a lot of you have had the experience of a test coming up that you should study for. And so you don't study for it; you wait until the night before the test, and then you stay up all night and study really, really hard, and then you take the test. This is called massed practice, as you're amassing your study. And it turns out that this is much, much less effective than if you were to take the same amount of time and spread it over the preceding week. So instead of sitting for five hours the night before the test, you study for one hour each day. Your retention of that material will be much greater.
And in particular, the optimal spacing seems to depend on your rate of forgetting: how thoroughly you remember it in the first place, how long it's going to be until the next test, and a couple of other factors. When you combine those factors with the forgetting curve concept I mentioned earlier, you get this exponential back-off situation, where in many cases the optimal way to learn and retain something is to look at it once, then test your knowledge of that thing a few days later, then again maybe a week later, then maybe a month later, and then maybe a quarter later, until after perhaps only four exposures, you only need to see the thing once a year. So for a total of maybe 10 to 15 seconds of practice time, you can durably and reliably remember that item of information for years to come.
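The exponential back-off schedule Andy describes can be sketched in a few lines of Python. This is a simplified illustration with made-up parameters (a two-day first interval and a 4x multiplier), not the actual scheduler of Orbit, Anki, or any other real system:

```python
from datetime import date, timedelta

def review_schedule(start: date, first_interval_days: int = 2,
                    multiplier: float = 4.0, reviews: int = 5) -> list[date]:
    """Generate exponentially backed-off review dates.

    Each successful recall roughly multiplies the next interval,
    so a handful of reviews can cover years of retention.
    """
    schedule = []
    interval = first_interval_days
    due = start
    for _ in range(reviews):
        due = due + timedelta(days=interval)
        schedule.append(due)
        interval = int(interval * multiplier)
    return schedule
```

With these toy parameters, reviews land roughly two days, ten days, six weeks, half a year, and then almost two years after first study, which matches the shape of the schedule described above (a few days, a week, a month, a quarter, a year).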
SPENCER: Right. And it can be made even more efficient if, based on how well you did on each review, you change the spacing, right? So if you know the content really well and you get it right, you can make the delay longer. If you struggle to get it right, then maybe you make the delay shorter.
ANDY: That's right. So the schedule can be personalized to you, compressed or stretched based on how much you struggled to remember the thing. And it can also be specific to the type of material. For instance, the system you're using may be aware, based on many other students' experiences, that this material is difficult to memorize, and it may modify the schedule accordingly.
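Grading-based adjustment of this kind is roughly what the classic SM-2 algorithm (the basis of tools like Anki) does. In this sketch, the ease-factor update is SM-2's published formula, but the interval handling is simplified for illustration; real systems add fixed first intervals, minimum ease, and other refinements:

```python
def next_review(interval_days: float, ease: float, grade: int) -> tuple[float, float]:
    """Adjust the next review interval based on recall quality.

    grade: 0-5 self-assessment (SM-2 convention; >= 3 counts as a pass).
    Returns (next_interval_days, new_ease).
    """
    if grade < 3:
        # Failed recall: reset to a short interval and lower the ease factor,
        # so this item climbs back more cautiously.
        return 1.0, max(1.3, ease - 0.2)
    # Successful recall: SM-2's ease-factor update, then grow the interval.
    new_ease = max(1.3, ease + (0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02)))
    return interval_days * new_ease, new_ease
```

A perfect answer grows the interval multiplicatively and nudges the ease factor up; a failure shrinks both, which is exactly the "compress or stretch the schedule" behavior described above.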
SPENCER: Another important element here is this idea of active recall, right? Do you want to explain a little bit about what active recall is and why that matters?
ANDY: That's right. I only alluded to it when discussing the forgetting curve, but it is a really important element of practice. It harkens back to another school-age habit most people seem to engage in. Say a test is coming up and you want to prepare for it. The most common way that students study is by rereading material: they'll look through the textbook and their lecture notes and, for each page, ask, "Do I feel like I remember this stuff? Do I feel like I need to re-study?" It turns out that this is not very effective. There's a variety of other things you might do, like writing summaries or highlighting, but one of the most effective is what's called active recall, also often called retrieval practice, where you cause yourself to retrieve the piece of information from long-term memory. So instead of simply rereading that the United States was founded in 1776, you ask yourself, "In what year was the United States founded?" and answer, "Oh yeah, it was 1776." The latter is much more effective.
SPENCER: Right. And you need, of course, to be corrected if you got it wrong. Otherwise, you're just going to re-encode a false memory.
ANDY: That's right. Although actually, several studies have suggested that retrieval practice may be more effective even if you aren't corrected. It depends on just how unavailable the information is. But even attempting to retrieve it and failing can be more effective than re-studying it.
SPENCER: That's really interesting. So one thing I find fascinating about this research is that it just seems so different from the way school works. I mean, I don't know about your school, but my school was almost the opposite. You'd learn some material, get tested on it, and then you'd never touch it again.
ANDY: That's right. To their credit, schools often do something they'll call a spiral method, where there might be a test at the end of the week on that week's material, and that test might also include some stuff from the previous week, the previous month, and maybe three months prior. This is a relatively common practice. But you're right that typically these tests are kind of spot checks. And so students cram for the spot check and do as well as they can. And then, because they did massed practice, they will now mostly forget that information, and it mostly won't matter. They also really struggle in many cases to learn certain things. Take the many students trying to learn how to perform two-digit multiplication, for instance: there's a straightforward algorithm you can perform. And not that I want people's understanding of this topic to be entirely algorithmic, but even just memorizing the steps seems to be beyond many students in our current system.
SPENCER: Yeah. And that seems like just a straightforward thing, where pretty much anyone could learn to memorize it with a memorization technique, right?
ANDY: Yeah, exactly. I think this is a problem that could just be solved. And likewise, in chemistry class, when you're trying to learn all of these properties of various elements, I think there's a straightforward fix. I want to be clear that this straightforward stuff really is just simple memorization, like learning your times tables more effectively. But the part I'm excited about is using these techniques to learn more complex conceptual knowledge. For instance, what exactly was the complex system which caused World War One, and how do we think about that with respect to our understanding of historical causation in general? This is often part of history curricula. And it doesn't exactly seem memory-laden the way certain facts you need to learn are; forming this more conceptual understanding of historical cause and effect doesn't really seem like something you can study with flashcards. But I argue that it actually is. There's a way of understanding how causes work. For instance, you want to look for effects that have no other good explanation than the cause you're thinking about. That's one move you want to play, and there's a variety of moves you want to play when thinking about historical causation. These kinds of things can be practiced in the same way that you can practice a date.
SPENCER: I'm just delighted with the extent to which you and I are on the same page about how much more powerful these ideas are than most people acknowledge. I've been using spaced repetition for (I don't know) eight years now, maybe, with a system I originally built for myself, just for personal use at first. And here is a totally random selection of cards from my flashcard system, because I want to give people a taste of how different the uses of flashcards can be from what you might expect.
So, just picking some random ones here. This one is on the links that have been found between people's philosophical views and their personalities — this is from a paper I just read the other day by David Yaden. This one is on how, if you're doing an experiment, you should control for the baseline values of people's attributes, like their age and gender, and what effects the method of controlling has on the result. This next one is on a theory I came up with about myself when I was working with a coach. This next one is on why we have identities; it's about the four different reasons we have identities. This next one is on an interesting idea someone I know proposed about how we could change capital gains taxes to make them better for society, and so on. The reason I'm reading these is that these are rich concepts. These are not the sort of things you think of putting on a flashcard. And yet, having done this for seven or eight years, the best usage I've found for a flashcard system is putting in rich concepts that I want to think about, try to understand deeply, and connect to a lot of other pieces of my knowledge. Do you have a reaction to the kinds of things I just read?
ANDY: I love it. I can't wait to talk about these more. This is something that I think is really underappreciated but also not all that well understood. In the cognitive psychology of retrieval practice, the spacing effect, and forgetting curves, what all of those papers test are things like English-to-Swahili word pairs, which is a common choice. And that's very different from the questions you just read. The questions that I find most useful in my day-to-day life are often more about increasing the salience of a particular idea or causing me to return to that idea again over time. I'll give an example of one that I wrote quite recently. I was in a conversation with a friend, and she suggested that I might be making a particular kind of mistake in the way I was interacting with people who were thinking about working with me — namely, that it's sometimes hard to ask other people to do something for you, even if they are your employee and you're paying them, when it's something you don't like doing yourself, like some really grungy programming task. That's just a failure mode; people like doing different things, and if someone is kind of up-and-coming, they might relish the opportunity to do the thing you find quite grungy. And even if they wouldn't like it, they are your employee, and you hired them for a reason, so you need to not fall into that failure mode. So I wrote the following spaced repetition prompt: "What failure mode did that person suggest that I'm at risk of when working with other people?"
SPENCER: That's so useful, right? So you monitor that?
ANDY: Yes, and the dynamics of how this kind of prompt works aren't exactly clear to me. We can try to characterize how well that idea is incorporated into my long-term memory in terms of how accurately I can answer that question over time, if you ask me exactly that way. But that's not really what you want. What you want is behavior change, right?
SPENCER: Exactly. You want to behave differently in the world because of knowledge, information, or changes in your own neural network in your brain, right? And whether you can say exactly what's on the back of a card, if you're doing a flashcard, seems to be missing the point, right? Being able to say what's on the back of the card doesn't matter if it doesn't change your behavior in any way. And even if you can't say what's on the back of the card, you have somehow been altered by being quizzed on that. That might be enough, right?
ANDY: That's right. And I'm really not sure about these dynamics. I have found, in my personal experience, that writing these kinds of prompts does help me change my behavior. But they probably don't follow the same kinds of forgetting curves as, say, the cards I write about a neuroscience experiment. [laughs] I don't know what curves are appropriate, and I'm not quite sure what kind of feedback system should guide this. They also don't always work, and I don't yet understand the situations in which they do or don't. I'm curious if you've learned anything about that.
SPENCER: I think it is a remarkably under-explored topic — and actually, this might be a good segue to talk about your really cool project, Orbit. Do you want to say some things about that?
ANDY: Sure, thanks. So this came out of a great collaboration with my friend Michael Nielsen.
SPENCER: (Former podcast guest, if you want to go check out his episode, it's really fun.)
ANDY: (It was a great episode.) So he and I had both been very interested in spaced repetition. We noticed that not only have schools not adopted these systems, but professional knowledge workers, whose livelihood and success seem to depend on internalizing material, haven't adopted them either. There are many potentially good reasons for this; there are lots of challenges to using these systems. One of the challenges is learning to write these prompts effectively and learning what kinds of things are good to write about. Learning how to write the prompts so as to create the correct effects is very difficult, and worse, that difficulty is not super apparent when you start doing it. So most people who start working with these systems will write really bad prompts for themselves, and it won't even be clear to them that they're doing this. They'll just kind of give up; it doesn't really seem valuable. So our suggested solution was: what if we could scaffold your understanding of how to write these good prompts by having an expert write them for you? People have tried this kind of thing before — for instance, Quizlet is a very successful education technology company that works on the principle of shared collections of prompts — but those often don't work very well because they feel kind of atomized, disconnected from any actual understanding of the material you're trying to learn. Say it's quantum computing: you download the deck, and it's like, "What's a qubit?" "How many dimensions are in the vector space of a qubit?" And you look at this, and you're like, "I don't know." It doesn't really seem connected with your understanding of the material. So we thought we might try to solve that problem — the problem of these atomized and disconnected shared prompts — by interleaving the prompts into a rich prose narrative, like a really good explanation.
And then you as the reader could have this experience of reading prose explanation. And then, every few minutes pausing and having this chance to review what you've just read using these memory techniques. But the prompts you're reviewing are now grounded in the terminology, the metaphors, and the narrative you've just been introduced to. So he and I experimented with this in the context of a quantum computing textbook. That project is called Quantum Country, and we learned a ton from it.
SPENCER: I highly recommend checking it out. It's so cool. It's a wonderful introduction to quantum computation through this new medium.
ANDY: Thank you, Spencer. We're still learning from it. But one thing that became clear really rapidly is that different kinds of writing, different kinds of audiences, and different kinds of contexts are going to need different interactions and different ways of relating with this nascent medium, which we call the mnemonic medium. And so I built a system called Orbit to generalize these ideas from Quantum Country, allowing us to explore these notions in a broader range of contexts.
SPENCER: One thing I like about what you're doing is that it seems like a sort of never-ending research project, where by releasing these different things out into the world and getting all the data coming back, you're able to actually learn new things about how memory works and how best to deploy these kinds of memory aids.
ANDY: Right. I'm very excited about that. And that really is the driving force for the work. So it seems first off to be this possibility of a kind of translational cognitive science where there are these fairly well-understood phenomena that we can maybe bring to more people by instantiating them in really compelling systems. And that in itself would be valuable. But one thing that I'm particularly excited about is this notion that the work may not just be translational and that actually, these systems may help reveal elements of the way that we learn, the way that behavior change happens, which cannot be observed easily in the context of laboratory experiments, or relatively limited academic systems. The questions actually need to be instantiated in big real-world systems that people are actually using in their lives.
SPENCER: Right. This is a major challenge in general when people design these kinds of cognitive science experiments. They want to control every variable, for good reason, because it allows them to actually study some phenomenon. But then you can have this issue of generalizability, which is: okay, you got people to memorize French in an extremely controlled environment, but does that have much to do with how people actually learn in the real world? And what would it look like to deploy these for people who are actually using them to learn the kinds of information that don't fit neatly into the standard structure?
ANDY: That's right. So I'll share one learning that seems quite interesting so far. There are, as I've mentioned, a lot of studies about how rapidly people forget, say, vocabulary words, definitions, or really just brute facts. And these are fairly pessimistic: people quickly forget such things, and even 24 hours later, performance is often degraded by a third or a half. But what we're seeing from Quantum Country actually looks quite different, and I think that is due to the conceptual nature of the knowledge. There are some brute facts in Quantum Country, like the numeric value of this quantum gate. But there are also a lot of conceptual facts, like, why does this need to be this way? Why is this relationship to that important? And the forgetting curves for these two sets of information look quite different. Encouragingly, people retain the conceptual information much more easily than you'd expect, relative to the brute factual information. There also appears to be an important mutually reinforcing element in the data we've reviewed: say, prompt one influences your memory of prompts two and three as well. And one reason this is exciting is that it might mean that when you're trying to use these memory systems with highly conceptual knowledge, you can be much, much more efficient in scheduling the subsequent reviews, and the time cost for maintaining the knowledge may be driven down quite a lot.
SPENCER: The cost is already remarkably low, but it's amazing if you can drive it down another 50% or 70%.
ANDY: Yeah. We can talk about the time costs, actually. And this is something we've learned from Quantum Country that I haven't actually seen in the literature, but I think it's closer to something that a typical person would want to know. The way I like to think about this is: if you go to a taqueria and order a burrito, they say, "Okay, which kind of rice do you want? What kind of beans and meat do you want?" And then at the end, maybe some add-ons: "Do you want some sour cream for $1? Do you want some avocado for $2?" So there are these bonuses for your burrito that you can choose to add. I like to think about these memory systems in a similar way. You just spent four hours reading this introduction to quantum computation. Now, would you like to add remembering this information to your burrito? And what does that cost you? In the case of the median Quantum Country reader, it costs them something like a third additional time to durably remember this stuff for about six months following the initial reading. So it may end up being as much as, say, 50% additional cost to remember it for the first year, and much lower in subsequent years.
SPENCER: Right. So it'd be like 50%, and then just like a little bit incremental on top for the year after that and even less year after that.
ANDY: That's right. And so it takes most readers about four hours to read the first essay of Quantum Country, so it's not super expensive to durably maintain a memory of all the key information in that four-hour segment. But it's also not free. It's worth driving down, right?
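A quick back-of-the-envelope version of the numbers Andy cites. The overhead fractions are his rough estimates from the conversation; the arithmetic is purely illustrative:

```python
reading_hours = 4.0          # median time to read the first Quantum Country essay
six_month_overhead = 1 / 3   # ~1/3 extra time to retain the material for ~6 months
first_year_overhead = 0.5    # ~50% extra time over the first full year

print(reading_hours * six_month_overhead)   # extra review hours over six months
print(reading_hours * first_year_overhead)  # extra review hours over the first year
```

So durably remembering a four-hour read costs on the order of one to two extra hours in year one, spread across many short review sessions.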
SPENCER: It's an interesting question of how much they would remember without the spaced repetition prompts because my intuition is that they remember very little of it, like less than 5% or something. But I'm curious what your intuition is on that.
ANDY: Right. In fact, we've run randomized controlled trials on this.
SPENCER: Oh, amazing.
ANDY: But I think they're not sufficiently controlled, or else some interesting things are happening. So the best data we have on this is for a one-month time frame, which is probably not long enough. But in the one-month experiment, the median reader forgot about a third of the material. Now, because of the way that we ran this experiment, we didn't want to have selection effects for conscientiousness, basically. So we were choosing among readers who were still doing reviews regularly and conscientiously. We were just withholding some of the prompts from them. And this means there were almost certainly interference effects: reviewing some of the prompts was reinforcing their memory for the prompts that we were testing. So probably, they would forget a little more than a third.
SPENCER: I see. That's tricky.
ANDY: I'm running some more experiments now that should give us a somewhat clearer picture. And there is some prior information about medical students and how much they forget over time following learning. Medical students forget roughly a third of their basic science knowledge after about one year, and then roughly half by the end of the next year, the researcher Custers found [Unintelligible 34:29].
SPENCER: I'm a bit confused about that because it seems to me quite clear that when someone reads a nonfiction book, if you were to quiz them on it six months later, they would forget far more than 1/3 of the material. In fact, they would remember a really small percentage. Am I wrong about that?
ANDY: No, I think you're probably right, but it may not matter. So let's talk about these med students. They're forgetting something like a third of their basic science knowledge after a year. But that's different from the situation you just proposed, because these medical students use some of that knowledge throughout the year; some of their coursework will depend on the prior knowledge. And so implicitly, they're performing retrieval practice in the course of the year. Perhaps their patient care even depends on some of that knowledge. And this, I think, is part of a pretty good criticism of spaced repetition. The whole aspiration we've been talking about on this podcast so far is that, well, maybe it doesn't matter if you can't remember the details of Pascal Boyer's points from Religion Explained for your next cocktail party. Because the details that really matter will be naturally reinforced by your environment and the activities you perform authentically; you don't need this inauthentic memory review stuff. I don't fully agree with this argument, but I think it's partially true, and I think it's pretty interesting to take seriously.
SPENCER: I guess the way that I think about this is that there's a sweet spot of knowledge where these kinds of tools are most valuable. The sweet spot is: this would be useful to you in some way if you remembered it, but you're not naturally going to remember it. And it's an interesting sweet spot, because I think we won't remember the vast majority of the things that we hear about, but it also doesn't matter; they're not actually useful. Think about people who read the news. What percentage of that do they actually remember? It's probably really low, right? And maybe even with nonfiction books, it's not like you'd want to remember every anecdote or story; most of it's actually not that interesting to remember. But if you read a good nonfiction book, maybe there are 12 really interesting ideas in it, and maybe each of those ideas has multiple parts. So maybe there are (I don't know, I'm just making these numbers up) 30 actual pieces of information you might want to remember. Not 300, but maybe 30. And then maybe you actually remember four of them, but it would actually benefit you to remember all 30. And so it's about trying to find those things that are not going to come up naturally. You're not going to be reminded of them, because if you kept getting reminded of them, you would probably naturally remember them. But you actually would benefit from knowing them. That's sort of the trick, I think.
ANDY: Yeah, I think that's one class. And I can think of two more classes where there seems to be quite a strong argument for this stuff. One of them is the class where you need quite a lot of components to do anything. Quantum computing is a famously difficult subject, and one potential reason it's so difficult is that, in order to understand any of the base-layer ideas, you need to internalize a lot of new terms and notations and concepts really quite quickly just to get to, "Okay, here's your first quantum circuit, let's talk about what it does." And so, if you don't have some kind of memory support, you're left juggling a couple of dozen things in your working memory in order to try to understand the first interesting idea, which you just can't do. And that's maybe part of what makes those topics difficult to learn. That relates to the second class, I think, which is like a toned-down version of that. Anyone who's tried to program has probably had the experience of writing their first program and constantly referring back to some reference for syntax details, ten times per line; referring to a library reference for the name of this function, the order of its arguments, and so on, again and again. You can't really get going, and it's very unpleasant to get started. The same is true of conversation in a foreign language: it's not pleasant to try to have a real conversation with a native speaker when you know five vocabulary words. So there's something about using these systems to get over the hump.
SPENCER: Yeah, those are excellent points.
[promo]
SPENCER: I want to dig in with you about what is worth remembering, because I think that's just a rich, interesting, and important topic. I think you've already started to hit on some examples, but maybe we could get a little more into it. One of the things I think about here is that people will often say, "Oh, well, you don't need to remember something because you can just look it up. We're in the world of Google and Wikipedia. What's the point of memory?" And what that point of view gets right is that, yes indeed, there are lots and lots of things that you don't need to remember because you can look them up. But what it gets wrong is that it underestimates the extent to which we need ideas in our heads in order to have certain thoughts. So I'm curious to hear your analysis of that.
ANDY: There are a few classes of things that are very interesting to try to memorize even when you can look them up. One of them is the behavior-change material we talked about earlier. I had a conversation with someone who pointed out an interesting thing I might be doing with employees, and I want to remember the key insight that she conveyed to me. I could write that into my notes, and then maybe, if I search for the right term, it would show up in my notes later. But that's not likely to change my behavior in the same way as being confronted with that prompt several times over time. Another angle is creative insight: when you make a creative connection between two ideas that no one has noticed before, or you notice a contradiction that suggests something interesting might be hiding, you can't make that connection or notice that contradiction unless you have those items available to you in your head at that moment. And then finally, in some of the examples I was mentioning earlier, about trying to learn a complex topic or learning programming, being forced to look up the material in the course of trying to do the activity makes the activity qualitatively different, in a way that might be so unenjoyable that you won't do it.
SPENCER: Yeah, all really good examples. I think about this idea of fluidity of thought. Imagine that you're trying to do calculus, but you have to look up the definition of the derivative every time. It just seems impossible. You need to get the derivative down to the point where it's so fluid that you can just work with it. It needs to be a building block you can attach to other building blocks and think about in real time. Not all ideas are like that, but some are. I also just wanted to point out that, even with looking things up, you need to know that there's something worth looking up. And that's not necessarily obvious in all cases. For example, think about the Pythagorean theorem, a squared plus b squared equals c squared, which a lot of people are forced to memorize. You don't really need to remember the formula itself; it's just not going to be that useful. But it might be useful to know that there's this thing called the Pythagorean theorem that relates the sides of a right triangle to each other. And now that you know that, you might be in a situation where you're thinking about a triangle. Let's say you're planning something in your home, you're designing a cabinet or something, and you think, "Oh, the Pythagorean theorem." And suddenly, you have the right hook into the information.
ANDY: Absolutely. This is a lighter form of memory, but one I think many people use with nonfiction books, where they read not to remember the details but to know, "Oh, there is a book that talks about this thing, and I will pull it down and look it up at the right time." So much of what we're talking about depends a great deal on the cost of memory. When people raise these hesitations, I think they're correct in the regime where memory is scarce, onerous, or expensive. But they're incorrect in a regime where choosing to remember something is almost free. And so, if we can move these systems along to get us closer and closer to that regime, then I think these questions become irrelevant.
SPENCER: Right. If memory could just be as simple as I choose to remember this, then suddenly, we might want to remember a lot more than we currently think we do.
ANDY: That's right. And this relates to another issue that I think is worth talking about. I think many people are hesitant about these systems or these ideas because their experiences of memorization are grounded in industrial schooling environments, where they were being forced to memorize things that they didn't necessarily find interesting, or which were structured by other people's agendas. And indeed, many of the tools on the market are framed this way, where it's about other people's ideas, about what other people think is important. And this is a problem with Quantum Country, too. Quantum Country helps you by giving you the author's prompts, so you don't have to write them. But it doesn't give you any fluidity or flexibility with respect to those prompts. So something that I think is really important to the success of these systems, and which I'm slowly working to figure out how to do in my mnemonic medium systems, is that the memory practice should be about you and the ideas that you find most interesting, meaningful, and exciting. It should be framed in a way that's compelling to you, relative to the experiences and stories that exist in your life. And it shouldn't have any shred of a sense of duty, of "should," of schoolishness. It's really just a tool for personal enablement.
SPENCER: So we've been talking about spaced repetition, which is an example of applied cognitive science or applied psychology in this learning space. So let's take a step back and just think more broadly about this, like translational cognitive science or translational psychology. I'm curious to hear your thoughts on this broader topic of how you take results from these fields and then bring them to people in a way that benefits them because it is something that is in common between both of our work.
ANDY: A lot of my thinking here is borne out of writing on translational medicine, where this is a really essential practice. You have lab scientists understanding things about, say, molecular biology, and eventually those insights make their way into drug development and, hopefully, into clinical treatments for patients. And that's a very pipeline-like model. A much more interesting model of translational medicine has been adopted, what they call bench-to-bedside and back, where insights from the bedside can substantially influence the bench. But even in that model, it's different people doing the two things, with specialization: the clinical decision-makers publish a clinical report of their observations, and then a pharmacologist, or maybe a molecular biologist, reads it and decides to run some lab experiments. I think an underrated space, which might have some unique ideas or discoveries to be made, is one where a single person is potentially capable of reading the literature, using it to construct a high-fidelity system (one that wouldn't be tractable within the context of academia), using that to learn new cognitive-science-level insights, and ideally publishing and disseminating that knowledge back to that community as well.
SPENCER: This is really related to an idea that I sometimes talk about, which I call full-stack social science, making a metaphor with full-stack software engineering, where a full-stack software engineer (as I'm sure you're aware) works on everything through the whole pipeline of an app, from the back-end databases all the way to the front-end user interface. With full-stack social science, I think about doing the full stack of things: from coming up with hypotheses, to designing and running studies, to taking those studies and translating the results into something in the real world, maybe building a product or adding features to a product, and then eventually pushing them out and actually getting them in front of users. It seems to me that there are many benefits to doing the full stack. And unfortunately, without the full stack, you can find there are a lot of gaps. Maybe you find some relevant academic studies, but there's some kind of bridge that needs to be built to actually put them into the real world; with just the studies in the papers, you can't quite apply them. And so unless you redo the study yourself, you can't apply it. Or maybe, in developing a product, you come up with some really interesting hypotheses, but if you don't have the right formal design or methodology, you may not be able to test them appropriately and see if they actually hold water.
ANDY: I really love this metaphor. I think it's a place where your practice goes a step further than mine. I am interested in building real-world systems, because I think they allow us to see things about the theory which can't be seen in toys or demo systems. But I don't currently plan to take those systems and make them into products or companies that can be sustainable, or do the stuff that might be necessary to scale them up in the way that they could be scaled. That's really just because my hands are full: I don't know how to direct, coordinate, or fundraise for that kind of work while also doing the kind of research work that needs to be done. Any attempts I've made that shade into that have felt overwhelming and unsuccessful. So I'm curious how you think about going in and dealing with that.
SPENCER: It seems like you're doing more of the steps than is traditionally thought of as being part of academia, and you're also doing it outside of academia. I think the entire spectrum would run all the way from hypothesizing through to selling a product or something like that, and you're doing significant chunks of it, more than is expected in these domains, which I think is really cool. And doing all the steps isn't necessary. But I'll just say that I think it's really hard to do all the steps, because it means you have to develop quite unrelated expertises. Designing an experiment doesn't have much to do with building a product that people actually want to use. And analyzing data, which is essential in your work and in my work, is yet another thing that doesn't have much to do with either of those. So it's really hard, and there are huge challenges, and it's stressful and difficult. And so, I completely relate to not wanting to go there. But I also think it can be really rewarding to bring it all the way from hypothesis to something that's put in front of people. And I really think you've done that. You have these essays out there that lots of people are reading and enjoying every day. So you haven't turned it into a for-profit project, but you're still benefiting people who delight in your work.
ANDY: Thanks. I appreciate that. I think you're right that one of the key barriers is the multivalent skill set. Certainly, even developing the skill set to do what I can at a relatively modest level has been challenging and unusual. But I want to push on that a little more, because I feel that even the part of the spectrum I do cover with my skill set is a lot: reading the literature, forming hypotheses, articulating some kind of concept based on those hypotheses, designing a system that expresses that concept, implementing it, shipping it, analyzing the results, and then documenting and disseminating them.
SPENCER: That's a ton of stuff.
ANDY: Yeah, and so one problem with all that is skills. But another problem is just capacity. I feel unable to do all of those things at the level that I want to do them. For instance, when I am implementing the software, I find it very difficult to also think about the research-level problems. Part of that is time, but I think people actually overrate the time component. It's much more about mindsets. I have to get myself into this mindset where I bring the system into my head and implement it. And when I'm in that mindset, it's very difficult for me to think about conceptual research problems. The same is true for many of those other skills. They all have their own hats that take time to put on and time to take off. I'm curious how you think about that.
SPENCER: Yeah, it's a really interesting point, because when you get really sucked into one type of thinking, it can be hard to move in and out of it. I mean, a big part of it is just the team, right? Ideally, you have a team, and different team members are in different modes thinking about different things, but you cooperate across those boundaries. I think that's a really good solution to this. My mind is pretty weird in that my preference is to think about lots of different things and move in and out of them. I don't write much code these days, but maybe I'm doing some coding, maybe I'm talking to someone about data analysis, maybe I'm working on user experience, and I'm bouncing between these things. Actually, I find it really wonderful that my preferred week involves all these different modes of thinking, without getting stuck in one.
ANDY: Right. And you've gathered a team as well to help you with this.
SPENCER: Yeah, that's absolutely essential. And I find it amazing how much you do yourself. I mean, it's pretty astounding that you're able to do the whole process.
ANDY: I'm curious to hear more about how you think about working with this team. Because, of course, financial limitations keep me from expanding as much as I might like. But also, there are some interesting conceptual limitations. I can't just articulate the hypothesis and concepts and then farm out the design to someone else because the process of doing the design work substantially influences the concept. The process of doing the actual technical implementation also does. And so, these things are all intertwined in a way that I don't quite know how to disentangle. It feels like they actually have to all be in my head. I'm curious how you deal with that.
SPENCER: A way that I deal with that is I try to be close to the design and other aspects, but close not as in doing the implementation myself, but rather, looking at it very carefully as it progresses and trying to also learn some of the insights that the person who's doing the object-level work is learning as they do it. But I think of it as part of my job: giving close critical feedback on what's being developed and considering, "Okay, here's the vision we're trying to get to, here's what was executed. How is this different from the vision? How do we need to nudge it back?" But I agree with you that actually doing the work yourself can give insight and ideas. So maybe, to some extent, there's a trade-off between efficiency and getting your hands dirty.
ANDY: That's right, and maybe even more important than efficiency is scope, or scale. I don't necessarily care about how long it might take to do something. I'm potentially willing to let it take two or four times as long if it allows me to reach some kind of outcome that couldn't be reached otherwise. But then, at the same time, there are ideas that I simply can't explore because they're impractical with the amount of velocity I can generate. And so for those ideas, maybe you can't access whatever part of the quality space requires everything being in one person's head.
SPENCER: I think a lot of people have a different limitation, which is that it's just not feasible for them to learn that many different skills simultaneously.
ANDY: Sure.
SPENCER: You have to do front-end, back-end, data science, research, reading papers, and designing experiments, right? And most people are going to be specialists in something. And then the other thing is, they have no choice but to outsource a significant amount of it. So, while I think there's this big benefit you get from doing all the things, it may not be feasible most of the time.
ANDY: There's another huge related trade-off, which is just that I am not an expert in all of those things, as a team of great people assembled around them potentially would be. And so you're trading off, on the one hand, the benefit of having everything in one person's head, which allows you to reach certain points in the possibility space that are challenging to reach otherwise, against, maybe, deeper expertise in certain parts of the process that you could get by working with people who made that their focus.
SPENCER: That's exactly right. And I wonder if you could get some of that benefit by just having people with different expertise scrutinize it. I don't know to what extent you work that into your model.
ANDY: Yes, I try to, but I think I need to do it more. And I think one thing that could really help is involving not just scrutiny, but potentially even some active consulting. For instance, I did a first pass of this data analysis (my data analysis skills are fairly rudimentary, probably at the level of a beginning grad student or something), and then: "Can I bring you in to bring this to the next level?" I did that for Orbit's art direction, and that was super helpful. I don't have a lot of budget for this kind of thing, but a few thousand dollars made a big difference there.
[promo]
SPENCER: I would describe you as an example of someone who's in what you might call para-academia — which Michael and I discussed in our podcast episode. So I'm curious, do you think of yourself that way?
ANDY: I do. I think I identify more with that than with the tech industry or something like that.
SPENCER: How would you describe para-academia for those who didn't hear that episode?
ANDY: Sure. I would describe it in terms of what I'm trying to do. I am mostly interested in producing ideas, rather than producing products, organizations, or things like that. So my primary output is figuring out, "Oh, maybe a new kind of memory system can be constructed by interleaving these spaced repetition prompts with narrative prose in this way, and it has these properties," rather than a specific implementation of that system. So that's the academic sense. The para sense comes from the fact that I'm not associated with an academic institution. I don't publish papers in refereed journals. And so I'm kind of a weird outsider in these ways.
SPENCER: And why not hook yourself into academia in some way?
ANDY: I am hooked in some ways. I talk to academics in these areas pretty regularly. We've even done some collaborations. And certainly, I read a lot of the output of academics as well as industry people. There are two main reasons. The first reason is something that we've discussed a fair amount or at least alluded to, namely, that I'm trying to pursue the point in possibility space that involves taking these theories and building really fairly high fidelity systems that express these theories in order to try to understand them better. And that is an activity that is very difficult to do in an academic setting. It is discouraged by academic incentive systems.
SPENCER: I have this intuition that if you were doing this in an academic setting, there'd be a lot of pressure to build a much crappier version of what you built, if that makes sense, because it would really be a research tool, not something that you expect real humans to use outside of the research context. Do you agree with that?
ANDY: Yes, that's correct. And in many cases, it's the right call. It's the right way to answer the question they're trying to answer. I'm looking at a different set of questions, and I think for that set of questions, it really does require this kind of high-fidelity implementation to understand.
SPENCER: Right, and high fidelity also involves here people using it because they find it cool, not because they're being paid to participate in the experiment.
ANDY: Exactly. And so the second class of reasons really has to do with the specifics of my field. And so if I were in a different field, I might be somewhat more inclined to try to make it work. And that just really comes from the fact that the field kind of most associated with my work is a field called human-computer interaction. And, I don't enjoy the dominant cultural values and practices in that field. I think they would hinder my work. It's a field that's very interested in trying to quantify and make empirical or scientific what I see as essentially a design science. And the incentives and discussions in the field really reflect that.
SPENCER: What do you think is the problem with trying to quantify or make that scientific?
ANDY: In some cases, it can be great. One classic discovery of the field, for instance, is that the top of the screen is a great place to put the menu bar, because if you ask people to target a button placed at the top of the screen, they can move their cursor up and just keep moving: they don't have to be precise about how far they move it. This means they can target a button placed at the top of the screen much more rapidly and accurately.
SPENCER: Oh, cuz it hits the top of the screen, you mean, and that stops the cursor.
ANDY: That's right. So that's an insight that came from this field decades ago, and it's a great example of a quantitative insight that's really quite valuable. But I think these kinds of methods are much less valuable when applied to, say, a new way for people to collaborate on a literature review. Suppose I've developed a system wherein people collectively synthesize the literature using some new set of verbs. If that kind of paper were to appear in a human-computer interaction journal, there would basically have to be some kind of study associated with it. So maybe they'd recruit some grad students and ask some of them to do a literature review. And then they'd come up with a bunch of measures, like how long it took them to do the literature review, or the number of papers considered, rejected, or cited; maybe they'd get some outside referees to judge the appropriateness of the papers included or not included. And all this would be fine; it's kind of interesting to know this stuff, maybe. But for the most part, I think it just misses the point, which is: does it substantially enable participants? These are really, really weak proxies constructed in a very inauthentic setting, and those two pieces together create quite a challenging setting.
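The menu-bar effect described above is usually formalized as Fitts's law, where movement time grows with an "index of difficulty," log2(distance/width + 1). A rough sketch; the constants `a` and `b` are made-up values here (real ones are fit per device and user):

```python
import math

def fitts_time(distance, width, a=0.1, b=0.2):
    """Fitts's law: movement time grows with the 'index of difficulty'
    log2(distance / width + 1). The constants `a` and `b` are
    device-dependent; these values are made up for illustration."""
    return a + b * math.log2(distance / width + 1)

# A 20-pixel-tall button in the middle of the screen has its real height.
mid_screen = fitts_time(distance=500, width=20)

# A button at the screen edge acts as if it were enormously tall:
# the cursor stops at the edge, so overshooting is impossible and the
# effective target height is huge.
edge = fitts_time(distance=500, width=2000)

print(mid_screen > edge)  # True: the edge target is faster to hit
```

This is the kind of quantitative, descriptive result Andy grants the field is good at, in contrast to the design questions discussed next.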
SPENCER: I see. So a lot of the time, the things we most care about, the real, true underlying purposes of these systems, are just hard to quantify. And so you end up with these not-very-relevant proxy measures, something like that.
ANDY: That's right. The nature of design science is that it's not descriptive. You're not trying to understand a particular physical phenomenon and characterize its dynamics. Rather, you're trying to create a new space of phenomena and ask, "What is possible?" "Okay, here, I've created a new space of possibility. Let's try to understand its contours, its form." And some of the ways we can understand that are, yes, perhaps empirical, perhaps quantitative. But often the most interesting questions are, "What does this enable that was not previously enabled?" So, bringing it back to our earlier discussion: "What would be true in a world where memory was trivial and automatic?" is a way more interesting question than, "How efficient is the memory system?"
SPENCER: I see. So pushing people toward measurable outcomes may, in some cases, not be the most interesting or important thing. As a mathematician, I love quantifying things. I'm sort of a data person, not just pure math, so I love these kinds of tools. But I also feel that it's just one set of tools, and the qualitative stuff can be as important or, in some cases, more important. Even just in question design, if you're doing a survey, I find combining the quantitative side ("What percentage of people said this?") with the qualitative side ("Well, what do people mean when they say this?") absolutely essential; they both can build on each other. Doing one at the expense of the other actually seems really risky, because it might just be the wrong tool for the job.
ANDY: Right. And don't get me wrong, there are human-computer interaction papers that do this, and which don't have studies at all. So it's more that the dominant culture is one that is trying to be a descriptive science rather than a design science.
SPENCER: So do you think more people should be doing para-academia? And do you think, is that good for society? What are your thoughts on that?
ANDY: I think the binary question is pretty easy to answer: yes. So maybe more interesting is the continuous question, like, "How many more people should there be?" Or, "If we could move some lever of people out of academia, what's the carrying capacity outside a purely academic setting? Should there be tens of thousands of such people? Should there be hundreds of thousands of such people?" We can talk about the funding constraints, and I think right now it would be very difficult for tens or hundreds of thousands of people to make that work. But I'd be curious to hear your thoughts: if we could snap our fingers and remove those funding constraints, how many projects or fields of investigation are there which would likely be better pursued in that setting?
SPENCER: I think you can make an analogy to the free market where you've got your free market capitalism, and that has certain problems that it creates like pollution. And then in those gaps, regulators can come in, and they can solve those problems (at least in the ideal world, they can fill these gaps in the sort of where capitalism falls on its face and causes all these issues). And one way to view para-academia is you can say, "Let's look at the academic system." And let's say, "Where are the gaps in that?" "What are the things that academia is not doing as well as it should be doing?" And then, "Could it be patched with independent researchers that are pursuing these things free from the constraints?" And also in some cases where incentive systems may cause academia to miss important things.
ANDY: Right. So there seem to be two general classes of generators. First, these kinds of translational things: often it's fairly difficult to get commercial funding, and to set up commercial organizations that are reasonable for translational work, or even better, the kind of feedback-loop-oriented work that we were discussing earlier. And then the second generator that seems to be valuable is the kind of totally out-there, potentially paradigm-shifting work, where basically you're doing work that amounts to trying to define a new field. That tends to be very difficult, at least in certain academic settings.
SPENCER: I think I'm very sympathetic to the idea that we want people trying radical things that will probably fail, as long as they're doing it in a way that is rigorous (if that makes sense). There are plenty of cranks, right? People who are trying to do physics but really don't understand it. And then you have a very small number of people who really do understand physics quite well, and they're working on radical ideas. And from my point of view, we should have hundreds or thousands of those, and most of those will not pan out. But that's where we'll get the really giant wins. Because every once in a while, one of those works, and you just radically improve the field. And I have a concern that in academia, it can be harder for that stuff to flourish. There can be a little bit of a winner-take-all dynamic, where one approach to something gets a lot of the funding, and then there's not that much funding for the more radical approaches. And some approaches might just seem too weird; maybe nobody wants to study them because it's just not that cool. Whereas if you are free from the system, maybe you would pursue it. So any reaction to that?
ANDY: That seems right. And I guess, again, it seems to come back to the motivation often being about escaping incentives. So maybe you can't get funding to do a weird investigation in the academic setting, either because it's difficult to get past the referees and journals that would be necessary to establish the credibility of your results, or because the funding agencies won't help you out. And so para-academia, I guess, for many people is a way of stepping out of that, seeking alternative funding sources (which may or may not be available). That's totally separate from feeling crowded out intellectually. I don't know how many people feel that, but it may just be very difficult to develop some very unusual new idea if all of the discussion around you is framing the problems in some fairly well-understood way, and no one's really interested in talking with you about your new framing.
SPENCER: Yeah, I think that's really interesting to consider. I've always been interested in physics, but I've never pursued it as a career, so I don't know too much about it, per se. But it strikes me that, if you want to make progress in physics, you want people who are really willing to reconsider assumptions that nobody else is reconsidering, because it was sort of decided to be true 20 years ago, and nobody challenges it anymore. And probably there was some assumption that was wrong somewhere along the way, but this stuff may be really hard to reconsider from within a paradigm where everyone just takes it as a given.
ANDY: Yeah. So are you familiar with Garrett Lisi?
SPENCER: Yeah. Garrett Lisi has a new theory of physics he's pursuing. Eric Weinstein has a theory of physics he's pursuing. Stephen Wolfram has a new theory of physics he's pursuing. I have no ability to evaluate these ideas, but I would like to see a hundred of these blooming, and each of them, I think, is inherently a longshot. They're all trying to say, "We're going to create a new theory of physics that is gonna try to unify physics, or do something that nobody's ever done." And so that's a huge longshot. But that's how startups work. You have a whole bunch of longshots, and occasionally you get a Google, right?
ANDY: Exactly. I think it has to be okay that many of these things will go nowhere. Often that's difficult in the contemporary academic setting. But para-academia has made it possible for someone to go off to Hawaii and just kind of hide and think about abstract geometry all day, and it could really lead to something interesting. Something that's special about each of these people (maybe not Wolfram) is that you really just need one person to think about the thing.
SPENCER: The budget for funding someone to just think about physics is pretty low, right?
ANDY: Exactly. Now, we should talk about some of the downsides, too. Non-association with an institution has its problems. It certainly promotes crankery. But I think it also, probably to some degree, harms the dynamism of para-academic work.
SPENCER: I think a lot of people believe that it's very hard to do good work without others to bounce your ideas off. And so that's maybe a serious downside of not being in an academic system. If you're part of a department, you can just walk around the corner and talk to world experts, debate ideas, hear what they're thinking about, and get inspired by their lectures. That's probably pretty tough to reconstruct. You can do it to some extent. You can just befriend a bunch of brilliant people, but it certainly takes a lot more work outside the system.
ANDY: Yes. All of the para-academic types that I'm aware of have had to construct something like that for themselves, more or less successfully. I would say that the kind of intellectual exchange in my life is not as high quality as it would be in a good academic department. And that seems like a huge risk. It seems really bad. It's something I'm worried about, and that I invest a fair amount of time thinking about.
SPENCER: I have a weekly phone call with a brilliant friend of mine, where we discuss psychology, and each time we pick a topic. And so one week we'll do identity, in another week we'll talk about anxiety. And I find it super fruitful to have those kinds of relationships.
ANDY: Absolutely. I'm so grateful for these people. Another important problem seems to be around archival and dissemination. So, for instance, at least Wolfram and I (and I'm not sure about Weinstein, etc.), and I think you too, mostly publish in non-refereed, non-academic settings.
SPENCER: The trade-offs are really amazing to me, because I've published academic papers. I never pursued it as a primary goal, but I published quite a handful of them. And the thing about it is, I think to myself, "Should I go try to write this up for a journal, which will be a long, tedious write-up process? I have to learn all the specifications of this particular journal. I probably have to do a bunch of research even to pick the journal in the first place. It's gonna take a lot of work, then I'm going to submit it. I'm not gonna hear back for a really long time. It's probably gonna get rejected, because most papers get rejected if you're applying to a good journal. Then I'm gonna have to reformat it and rework it a whole bunch to go submit to another journal. Two years later, maybe it's gonna get published. Okay, that's Option A. Option B: we do a write-up for the Clearer Thinking blog, and we send it out via email, like a week later, to 200,000 people. We release our code and data, so if someone else wants to build on it, they can just go do that. They can look at exactly what we collected, and reanalyze it or go do their own version using our code." The trade-off is just insane. I mean, the benefit-to-cost ratio between the two approaches is just mind-boggling to me.
ANDY: That's right. But the main thing that I worry about in enjoying all those benefits myself is archival and connection to this broader system. So my citations are not picked up in the same way, and my work usually doesn't appear in Google Scholar, although I'm sort of slowly figuring this out. I'm uncertain that it will stick around reliably, and as long as it should; the same archival stewardship is not there, and so on, and so forth.
SPENCER: What if we just turn these into PDFs and then put them in an archive? Is that doable?
ANDY: Yeah, that's one approach. Although part of the reason why I self-publish is that often there's interactive stuff in there.
SPENCER: Yeah, it's really hard to preserve interactive stuff for the future.
ANDY: So one of the interesting challenges for para-academia is also just: where does the money come from? One really common answer seems to be that people are independently wealthy. And it actually seems kind of okay. A lot of historic science did happen that way, gentlemen of leisure, so to speak. But something I've been experimenting with that's been pretty interesting is crowdfunding.
SPENCER: Yeah. So have you done crowdfunding for some of your projects?
ANDY: Yeah. The primary funding for my work comes from a Patreon. And it supports me and pays my expenses. It's not by any means lucrative. But it's interesting to compare the scale to a typical academic setting. If I were junior faculty in the sciences at a typical research university, the starter grant I would be pursuing is the CAREER grant from the NSF. It provides relatively modest funding for about five years, and it's the most common grant given in these settings. The crowdfunding model now generates about two-thirds of one of those grants. So we're not all the way there. But it's starting to seem like it can be comparable to this standard source of funding for new academics.
SPENCER: That's really interesting. And I know, for example, Gwern, who does fascinating hand-made meta-analyses on interesting topics for his blog. He basically is, as far as I understand it, fully funded by his readers to just spend his time researching, writing, and producing interesting output.
ANDY: That's right, although his story actually concerns me a little bit. You can go and look at his Patreon numbers, and they aren't nearly as appealing as mine, which I think is wrong, because I think his work is much deeper and much more interesting, and he's also been at it much longer. I think the difference is probably explained by two things. One is that he doesn't really talk very much about the fact that he has this model, and that's kind of unfortunate. Every time I do talk about it, I feel a little bit like I'm shilling; it feels like it distorts or poisons the work. And the second is that I think his work is less legible than mine, not because he's a worse writer (he's a great writer), but rather, it's just weirder. It's further out there.
SPENCER: Maybe a 20,000-word analysis of a topic is not everyone's cup of tea.
ANDY: That's right, especially if it's a very obscure topic. And so one thing I'm concerned about is that crowdfunding may only be possible for comparatively more boring topics.
SPENCER: But I don't know about that. I feel the stuff you're working on is actually pretty cutting edge. And I think it's a really good sign that people are willing to fund your work. I mean, what do you think the motivation is, why do you think people fund your work?
ANDY: I've done a survey on this. I did one in December, so it's relatively recent, and about a third of people responded. So I don't know how representative this is. But of those people, the primary motivation seemed to be causing marginal production to happen. I just gave them an open-ended response: "Why do you support me?" And the most common response was something along the lines of, "I want more of this kind of stuff to exist." So implied in that is, in theory, by giving money, they cause more of it to happen.
SPENCER: And that's probably true, right? I mean, it does free you from having to work in other ways to make ends meet.
ANDY: Yeah, that's right. It's mostly true. I mean, there's an interesting aspect to this, because at this point my expenses are paid. And so what exactly happens with the marginal money? Well, at the moment, my expenses are paid, but I can't really save, and that makes me nervous. And if I'm feeling nervous, then maybe I will only do this for a couple of years, because it doesn't feel long-term sustainable. So we're kind of getting over that at the moment. And then the next thing is staff. So now it's about accelerating the work.
SPENCER: Right. Well, I certainly could see using more money to have an assistant, or a programmer to work with. That feels like very high leverage.
ANDY: Right. It'd be great. In fact, a few donors are helping me make that possible right now. And so listeners, if that's something that you might be interested in, you can reach out.
SPENCER: Yeah, where's the best place for people to find your work and learn more about what you're doing now?
ANDY: My website andymatuschak.org is a good start. I'm mostly fairly active on Twitter.
SPENCER: I definitely recommend following Andy on Twitter. And if you can't spell his last name, don't worry. We'll have the link in the show notes so you can find his works.
ANDY: Thanks, Spencer.
SPENCER: Andy, thanks so much for coming on. This was really fun.
ANDY: Thank you, Spencer. I had a really great time.
[outro]