January 27, 2022
How is the economy like a differential equation? Can the economy grow indefinitely? Are there economic attractor states? Or are economic outcomes chaotic and/or extremely sensitive to certain variables? What should we know about progress in genetic engineering? Can you (and should you) do genetic engineering in your garage? What are some common mistakes people make when thinking about AI? Should we expect AI abilities to converge in some domains and diverge in others? Why do we sometimes collectively forget important ideas? Have we as a species grown wiser over the course of our history? How can we form high-trust communities on the internet? In the context of social media, is ease of access at cross-purposes with membership screening and/or costs, or is it possible to have both? What should we make of ephemeral communities that appear briefly, do something huge, and then disappear (like the WallStreetBets subreddit phenomenon)? What are the various types of misinformation being used in the US, Russia, China, and elsewhere?
Alyssa Vance is an engineer of AI systems, a futurist, and an entrepreneur. She is currently serving as an independent consultant for a variety of organizations interested in AI. She was previously the first employee at Apprente, which developed conversational AI for the McDonald's drive-thru and was acquired by McDonald's in 2019. She was a founder of CandleCRM, MetaMed, and GetBitcoin, and served as Executive Director of the World Transhumanist Association. She also hosts the Long Term World Improvement mailing list and other groups for discussing future technology. Alyssa has recently joined Twitter at @alyssamvance and can be reached via email at firstname.lastname@example.org.
JOSH: Hello, and welcome to Clear Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast. And I'm so glad you've joined us today. In this episode, Spencer speaks with Alyssa Vance about differential equations and complexity, genetic engineering, AI systems, and creating high-trust online communities.
SPENCER: Alyssa, welcome.
ALYSSA: Hi, Spencer.
SPENCER: It's great to have you on.
ALYSSA: Thank you. Thanks for inviting me.
SPENCER: So I wanted to ask you first, how is the economy like a differential equation?
ALYSSA: Yeah, so a differential equation is just a mathematical model where you have some kind of system, and there's a relationship between how the system currently is, and then the way that system is changing, or sort of, you know, the rate at which it's changing.
SPENCER: Right. So on one side of the equation, you might have things like the total GDP. And then on the other side of the equation, you might have things like the amount of change in the GDP, the derivative of the GDP, things like that, except, obviously, with many more variables than just one like GDP.
ALYSSA: Right. One example is humans. So if you think about humans, you have some types of humans, like babies or young children, who are very small but growing, and growing quickly. You can't have a baby that just doesn't grow and just stays a baby forever, you know, that doesn't work. And then you also have other types of humans, like adults, who are much larger, who might be five or six feet tall. But you can't have an adult who's growing really fast, who's growing like an inch every two months, because, you know, that doesn't work either. That's not how human biology works.
SPENCER: So is the idea there that if you were to write the differential equation, describing the development of a human's height, the height would be a function of the derivative of height? Or in other words, your current height is related to how quickly you're growing?
ALYSSA: Right. Exactly. So there are some types of states that you can be in, where, you know, you're a baby, and you're small, and you're growing quickly. But there are other types of states that you can't be in, like a baby who remains a baby forever, because that sort of thing doesn't make sense.
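The baby-versus-adult example can be sketched in a few lines. This is a minimal, illustrative model, not real growth data (the rate r, the 20-inch starting height, and the 70-inch cap H are all assumed numbers): a differential equation in which the rate of change depends on the current state.

```python
# Logistic growth: dh/dt = r * h * (1 - h / H).
# When h is small (a baby), growth is fast; as h nears the adult
# height H, growth slows toward zero, so "an adult growing an inch
# every two months" is not a state this equation allows.

def simulate_height(h0, r, H, dt, steps):
    """Integrate dh/dt = r*h*(1 - h/H) with simple Euler steps."""
    h = h0
    for _ in range(steps):
        h += r * h * (1 - h / H) * dt
    return h

# Start at 20 inches, cap at 70 inches, simulate 20 "years" of growth.
final = simulate_height(h0=20.0, r=0.5, H=70.0, dt=0.01, steps=2000)
print(round(final, 1))  # settles very close to the attractor at H = 70
```

Height here is just a stand-in; the same shape of equation, where the current state feeds back into the rate of change, is what the conversation goes on to apply to GDP.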
SPENCER: Got it. And so how do we connect this to the economy? And what do we learn by thinking about the economy this way?
ALYSSA: Right. So there are a lot of things in how the economy currently is, you know, how happy people are, how secure people's retirements are, how many people have jobs, what kinds of jobs people have, what people's incomes are, what the unemployment rate is, and so on, that are tied, in particular, to the rate of economic growth. So there are a lot of things that depend on an assumption of, you know, some 2% economic growth, or 3% economic growth.
SPENCER: And economic growth here is a derivative of basically the economic output, right?
ALYSSA: Yeah, so economic growth is the change in the size of the economy. A lot of things about, you know, the modern economy, and modern society as a whole, can really only exist in the presence of economic growth. But then if you have economic growth for a long time, if you have 3% economic growth this year, and next year, and the year afterward, and the year afterward, eventually you wind up at a really different place. So 3% economic growth is doubling the size of the entire economy once every something like 20 or 25 years. Well, if you double every 20 years, then after a century or two centuries, you essentially wind up at a really different place than you started.
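The doubling arithmetic here checks out: at a constant annual growth rate g, the doubling time is ln(2) / ln(1 + g) years. A couple of lines confirm the rough 20-to-25-year figure:

```python
import math

def doubling_time(g):
    """Years for a quantity to double at constant annual growth rate g."""
    return math.log(2) / math.log(1 + g)

print(round(doubling_time(0.03), 1))  # 23.4 years at 3% growth
print(round(doubling_time(0.02), 1))  # 35.0 years at 2% growth
print(round(1.03 ** 100, 1))          # 19.2x larger after a century of 3% growth
```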
SPENCER: Got it. So is the concern more that many of these different systems are kind of contingent on this economic growth continuing? Or is it more that this level of economic growth, assuming it really does happen, actually changes society much more radically than people realize?
ALYSSA: It's sort of like, there are several different options that you have, and there's no option that doesn't involve some amount of change, or some amount of instability. So if you think about it, go back to the 1950s, and I'm sort of visualizing my grandfather, you know, coming back from the war, coming back to the booming American economy of the post-war period. He didn't realize it, but he faced a trilemma, as I imagine it, because of this equation that sort of governs the society. So the first option is that, okay, we have this 1950s economy, it's doing really well, everything is going really great. There are all these factories, the roads are getting built, there are more power plants and more electricity, and so on. And we could just keep doing that, you know, keep doing what we're doing, building more roads, more cars, everyone drives more places, we have more strip malls and just more everything, and we have great growth forever. But if you do that, what happens is that all that stuff emits CO2, among other problems, but I'll just pick that one as an example. So if you keep doing that, eventually you get way, way too much CO2 in the air, and then eventually you get climate change, and eventually people can't breathe.
SPENCER: So this could be another term in the differential equation of society, like how much CO2 is in the air. And maybe at low levels it has almost no impact, but at some level high enough it starts to cause instability in the equations.
ALYSSA: Yeah, eventually, if you keep changing something, eventually you get to something that you didn't expect, or like, you know, something that causes some sort of problem or some sort of instability.
SPENCER: Got it. Okay, so that's the first option in trilemma. What's the second one?
ALYSSA: The second one is the one that I think a lot of people are hoping for now, which is that, okay, well, we can keep growing, but we can switch to, at that time, nuclear power, or, you know, now people are thinking more like solar power and wind power. We can switch to a non-carbon energy source, where we keep producing more electricity, but we don't produce any CO2. And on the one hand, that solves that problem. But on the other hand, if we replace the entire fossil fuel industry, well, there are 18 countries, last I checked, where the majority of their exports are fossil fuels: oil, gas, and coal. So if none of those have any market value, because we killed the entire fossil fuel industry, then none of those countries are going to have anything to export, and all of their economies are going to go bankrupt. And all the fossil fuel companies are going to go bankrupt. And everyone who works in the fossil fuel industry, and everyone who works in a town that's supported by the fossil fuel industry, is going to have to move or find a different job. And that's going to cause problems.
SPENCER: I see. So in attempting to solve problem A, you essentially create other problems, not necessarily as bad as the original problem, but they're problems nonetheless.
ALYSSA: Right, exactly.
SPENCER: Got it. And what's the third option in the trilemma?
ALYSSA: Right. And the third option is one that I think a lot of people advocated for during the 60s and 70s, which is sort of the degrowth option, where you just stop all of the interstate highway construction, you stop all the power plant construction, you stop giving more and more cars to everyone, you stop driving farther and farther, you stop the economic growth processes that we had. But because of the way the differential equation works, if you change the right-hand side of the equation, if you set economic growth to zero, you also change the left-hand side of the equation. Retirement funds, for example, assume a certain rate of economic growth, the stock market assumes a certain rate of economic growth, the political system and the Congressional Budget Office projections assume a certain rate of economic growth, and so on, and so on. So I don't know exactly what would happen, but I think if you suddenly switched the entire economy to zero growth, it would be something like, you know, the entire country goes bankrupt. It would be really bad.
SPENCER: So I think what you're saying is that baked into a number of our policies and institutions is just this fundamental assumption of economic growth. And without it, a lot of these policies and decisions just don't make sense anymore.
ALYSSA: Right. Like, entrepreneurship, for example, is an awesome institution, in my opinion, and currently an extremely good institution for our society, on average, certainly. The idea is that you can start a business and create value for yourself and your customers, and then everyone is better off, the world is better off than it was before because of what you did. But then, if you do that, economic growth is sort of part of that: the economy plus your business is now larger than the economy without your business.
SPENCER: Right, that makes sense. And you use this example of climate change. But I think you're making a much more general point. So do you want to comment on that?
ALYSSA: Yeah, so CO2, I think, is just a salient example. It's one that a lot of people are worried about, but it's really just one instance of a bigger thing, where if you keep growing something forever, eventually you're usually going to run into some type of unintended consequence. Like, we in the last century have been extremely productive with agriculture. We now produce enough food to feed the whole world, which is an astounding achievement. But now we have the food industry, which makes money by making more and more food, and by making tastier and tastier food, and now we see that obesity rates are going up. And, in fact, I was talking with a friend earlier about how obesity is actually canceling out a lot of the innovations that we've made in the pharmaceutical space, because, okay, we've cured this form of cancer, and that increases life expectancy, but obesity decreases life expectancy.
SPENCER: Right. So I guess, going back to the trilemma that you posed, and trying to frame it in an even more general context, the first option is that we just keep growing at the same rate. But it seems like, in a number of ways, that creates potentially radical effects, on the environment, on the human diet, etc. And you could probably list many such effects.
ALYSSA: Right. And take the computer industry. For a long time, that was sort of, as Peter Thiel talks about, seen as safe, you know, not subject to government regulation, because it was seen as being lower risk. But now that we have superfast computers, people are legitimately worried about malevolent AI, and they're worried about people getting addicted to social media, and they're worried about, you know, the effects of this on democracy, and so on. And these are all legitimate concerns. If you go too far in any given direction, eventually you wind up with some type of problem that you have to deal with somehow.
SPENCER: Right. And then the second part of the trilemma is basically, okay, well, then what we could do is try to fix all the problems we're creating, right? And I think the point there is just that sometimes the solutions to those problems create other problems, or may be hard to foresee. Like, how do you actually solve social media addiction? It's actually really unclear what you do. And then maybe there are unforeseen consequences in trying to solve that. Is that right?
ALYSSA: Yeah, that's right. And what we saw with the old system, what we had with broadcast television, was that, well, they sort of did a lot of things subtly to keep extremists off. If you had, you know, really radical political views, the TV networks just sort of wouldn't book you. But nowadays, on, say, Facebook, the default is that anyone can join. And then, yeah, sometimes people can attract huge audiences by being firebrands. But then Facebook sort of semi-arbitrarily decides, oh, we sort of don't like this person, and after you've attracted a huge audience, they try to ban you afterward. And that creates its own mess.
SPENCER: Right. And then they end up being an arbiter of what is truth, and who's right. And you know, if someone says that you should wear masks, or you shouldn't wear a mask for COVID, they have to make a decision about, well, should one of those groups be banned? And which one, right?
ALYSSA: Yeah, that's right.
SPENCER: And then the third part of the trilemma is we could just stop growing. But then, of course, as you said, so many things depend on this assumption of growth. So yeah, do you have a thought about how we approach this problem on a meta-level?
ALYSSA: Yeah, these are problems, but I think the problems are solvable. I don't think we're doomed. Eventually we will, for example, develop renewable energy; eventually we will have things like semaglutide that can solve obesity, and so on. The biggest issue, in my opinion, is just that this model of the world is not what people were told. It's, in many ways, the opposite of what people were told, and a lot of people don't want to believe it.
SPENCER: So what do you feel people were told? And how does it differ from what you think is true?
ALYSSA: Yeah, great question. Going back to, you know, my grandfather in the 1950s, we can actually go back and look at what people were taught back then. There's a video on YouTube called “The Strange World of 1950s Propaganda”. These are videos, sort of instructional films, that taught people: here's how society works, here's how you get hired, here's how you go on a date, here's how you make friends, and stuff like this. They don't say this explicitly, but there's this running theme that society is fixed, this is how society is, and we should just assume that this is just the way things are.
SPENCER: By the way, now you're making me incredibly curious about what “The Strange World of 2021 Propaganda” would look like in 50 years.
ALYSSA: That is a really great question.
SPENCER: Do you think it's the same kind of idea, that we're being taught that the world is fixed in a way that it's not?
ALYSSA: Great question. I think the emphasis has shifted. I think that this idea from the 50s is still sort of lurking around, and a lot of people still kind of believe in it, but I don't think people are actively pushing it as much. The main messages of today are more like red versus blue propaganda that emphasize different things.
SPENCER: Right, that makes sense. I also think maybe there's a form of propaganda around, like, we're living in the technology age, and everything is developing so rapidly. Whereas maybe the reality is that some things are developing much slower than we would like and than people admit, and other things are developing much faster than people are aware of. So actually there's this really weird fusion where people are way overhyping some technological advancement and underhyping other parts. And yet maybe the narrative is that everything is going so quickly, or something like that.
ALYSSA: People can have a lot of beliefs that are contradictory without realizing it. I think one of those contradictory beliefs, in some ways created by this differential equation model, is the idea that you can have economic growth, and also rapid technological progress, and also society will stay the same, and just nothing will ever change very much in terms of, like, me and my personal life. Most people, I think, when they imagine their old age, when they imagine retirement (I'm 29 right now, so, you know, what will life be like when I'm 69 and thinking about retiring?), they imagine retirement with 2020s technology, or with whatever technology is in their current year. They don't imagine retirement with the politics and the technology of, like, 40 years from now. They're two different models that are disconnected from each other.
SPENCER: Yeah, I totally agree. Although I'm sympathetic to doing that, because while we know the world will be radically different, it can actually be surprisingly hard to predict how it will be different, and so it's very hard to have a concrete mental model of what to expect.
ALYSSA: On the one hand, yeah, on the other hand, it's really important that we try to figure that out. Shameless plug, I've set up a mailing list, sort of with the goal of trying to figure that out. We certainly haven't figured it out yet. But I think we've made a lot of useful progress.
SPENCER: How do people find it?
ALYSSA: Yeah, it's called Long Term World Improvement.
SPENCER: I commend you for trying to figure this stuff out, because it does seem just incredibly important. Going back to the differential equation model of society, I think one thing that seems really beneficial about thinking in that way is, if you've ever worked with differential equations, you realize how difficult they can be to predict. They can have attractor states, where from a wide range of different starting points, you kind of get sucked into these attractors. And they can also have chaotic behavior, where from slight changes in initial conditions, you get very different behavior, behavior that can be chaotic in the technical sense. So I think this is useful to think about with regard to society as a whole. Are there attractor states where, basically, unless we push really far away from something, we're gonna end up getting stuck in one sort of society? Or, on the other hand, is it really a chaotic system, where there are a lot of different places we could end up, and it's almost impossible to predict, because it turns out slight differences in 30 different variables could lead us to very different worlds?
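The sensitivity to initial conditions Spencer mentions is easy to demonstrate with a toy example. This uses the logistic map, a standard one-line chaotic system (not a model of any real economy), run from two starting points that differ by one part in a billion:

```python
# The logistic map x -> r * x * (1 - x) is chaotic at r = 3.9:
# nearby trajectories separate exponentially fast.

def trajectory(x0, steps, r=3.9):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.300000000, 60)
b = trajectory(0.300000001, 60)  # initial difference: one part in a billion

# After a few dozen steps the two runs are completely decorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

This is the "chaotic" half of the question; an attractor state is the opposite behavior, where many different starting points converge onto the same trajectory, as in the logistic growth example earlier.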
ALYSSA: I think we really see both, depending on what you're looking at and what counterfactuals you're using, and certainly, if you look at history, I think there's strong evidence for both at different points in time. Looking at 20th-century European history, for example, if you look at World War One, there were some specific events, in the Balkans in 1914, that no one could have predicted, that this one guy would be shot on this particular day. But then, on the other hand, the general development of, okay, we have this type of military technology, and that encourages this type of army, which encourages this type of fighting, which encourages this type of alliance system, and so on, all of that had been in development for decades. And a lot of that was predicted in advance, in particular by this guy named Jan Bloch, who wrote a book published in English as Is War Now Impossible? So I think a war of the type of World War One was sort of an attractor state that would have required a lot of effort to veer away from.
SPENCER: It makes me wonder whether we can predict a lot more about the coming wars of the future, because now we know that drones, quadcopters, and hacking attacks seem like they're going to play an increasing role in war. And yet we've actually had very few examples of that so far. But as they begin to happen, they seem like they might dramatically change warfare.
ALYSSA: Yeah, I'm actually wondering about that, because that's certainly true. But there's a historian named Bret Devereaux who writes a really awesome blog; you can check it out at acoup.blog, that's a-c-o-u-p dot blog. And one of the things he writes about is the different systems of warfare. There's the first system, going back to prehistoric times, which is sort of raiding and ambushing. There's the second system of big armies lining up in battle formation. And there's a sort of third, modern system. And the systems are so radically different from each other that oftentimes they don't even understand each other. And I'm wondering if modern war might even be moving into, like, a fourth system, beyond what most people would even recognize as a war. Like, if you have narrow AI that's sophisticated enough to go into a country and sort of nudge certain people into doing what you want them to do.
SPENCER: Through social media or hacking or what kind of thing?
ALYSSA: Yeah, so as to make that country do whatever you want. Then why even fight what's called a conventional or kinetic war? You can achieve all of your objectives anyway.
SPENCER: Yeah, it's really interesting to think about. Like, the most effective attack is one where the other side doesn't even realize that they're being attacked, or something like that.
ALYSSA: Yeah, exactly.
SPENCER: So we've been talking a lot about the advancement of technology. One particular technology I'm interested to hear your thoughts on is genetic engineering and how that's been developing. Could you tell us a little bit about what genetic engineering is? And then, like, what should we know about how it's been progressing?
ALYSSA: So the basic idea is that in biology, we have this idea called the Central Dogma. An organism has DNA, and you know, the DNA is replicated and passed down from generation to generation, and the DNA makes RNA, and then the RNA makes proteins. The proteins do all the stuff the organism does. Like, we have proteins in our brains that do our thinking, we have proteins in our muscles that let us run, we have proteins in our stomachs that digest our food for us, and so on. But sometimes things go wrong, or things don't work correctly. And sometimes you want to add capabilities to an organism that weren't there before. The classic easy example is glowing bacteria: you want to make bacteria glow in the dark. In this case, you want to modify the organism to work in a different way, and we do this by changing the DNA of the organism. What you can do is take the DNA sequence for that protein and put it into what's called a plasmid, which is just a circular loop of DNA that contains the DNA sequence for that protein, and also a bunch of other stuff to help that DNA get expressed, and to make sure that you've inserted it correctly, and to select the cells that have picked it up, and so on.
SPENCER: So genetic engineering here refers to basically swapping some genes for other genes.
ALYSSA: In this case, it's inserting genes: the organism doesn't have this particular gene, and you want to insert it into the genome.
SPENCER: Got it. So what does the Central Dogma refer to here?
ALYSSA: Yeah, so the Central Dogma is that you go from DNA to RNA to protein, and you can't go backwards. You can't go from protein to RNA, or from RNA to DNA, in cellular organisms, at least. Some viruses can go from RNA to DNA, though, as we'll get to.
SPENCER: So are you saying that we can somehow subvert the Central Dogma? I'm having trouble putting the Central Dogma idea together with the genetic engineering piece. Maybe just step back for a second.
ALYSSA: This is one of the biggest discoveries of biology in the 20th century, and it really set the foundation for everything that we do today. It's that in a human, or a dog, or a cat, or a plant, or yeast, or basically any other organism that you'd have around your house, for example, the sequence always goes in the same order. DNA copies itself, so you can go from DNA to DNA. And then DNA is transcribed to make RNA, but you can't go backwards, you can't go from RNA back to DNA. And then, in the ribosomes, RNA is used to make protein. But you also can't go backwards there; you can't take protein and then reverse engineer it and use it to make RNA. Viruses, in some cases, are an exception to this. You can have an RNA virus that has an enzyme called reverse transcriptase, where the genetic material of the virus is RNA, and then it produces a substance that lets it take that RNA and translate it backwards into DNA, and then, you know, stick that DNA into a cell of the organism and infect the organism that way. Most organisms, like humans and so on, cannot do this.
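The one-way flow described here, DNA transcribed to RNA and RNA translated to protein, can be illustrated with a toy script. The codon table below is a five-entry fragment of the real 64-codon table, kept small for the example:

```python
# Central Dogma, forward direction only: DNA -> RNA -> protein.

CODON_TABLE = {  # RNA codon -> amino acid (one-letter code); partial table
    "AUG": "M",  # start codon (methionine)
    "UUC": "F",  # phenylalanine
    "GGU": "G",  # glycine
    "AAA": "K",  # lysine
    "UAA": "*",  # stop codon
}

def transcribe(dna):
    """DNA coding strand -> mRNA: thymine (T) becomes uracil (U)."""
    return dna.replace("T", "U")

def translate(rna):
    """mRNA -> protein: read three bases at a time until a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE[rna[i:i + 3]]
        if amino_acid == "*":
            break
        protein.append(amino_acid)
    return "".join(protein)

rna = transcribe("ATGTTCGGTAAATAA")
print(rna)             # AUGUUCGGUAAAUAA
print(translate(rna))  # MFGK
```

There is deliberately no protein-to-RNA or RNA-to-DNA function here: cells only run these arrows forward, and it takes a viral enzyme like reverse transcriptase to go from RNA back to DNA.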
SPENCER: So my understanding is that doing this kind of genetic engineering work, like for example, perhaps making a bacteria that can glow in the dark, and that kind of thing has just become dramatically cheaper. And maybe a lot of people don't really realize to what extent that's happened.
ALYSSA: Yeah, that's right. People talk about Moore's Law for electronics. We've also had a Moore's Law for gene sequencing, which has actually gone faster than Moore's Law for electronics. So we've filled up this enormous library of gene sequences, for humans, but also for, like, a zillion other organisms: for bacteria, for animals, for things like yeast and plants, and so on. And then gene synthesis has gotten not quite as cheap, but still sufficiently cheap that you can just order genes to be synthesized.
SPENCER: Got it. It's to the point where people are able to do these experiments in their garage, right?
ALYSSA: Yeah. So there's a lot of stuff like this on YouTube. There's a particular YouTube channel I really enjoy called The Thought Emporium, and he's produced a lot of videos like this. One of them is the bread made with beta carotene yeast.
SPENCER: Oh, interesting. The idea being that it can give you some of your vitamins in the bread?
ALYSSA: Yeah, so beta carotene is the stuff in carrots, and it's also found in some other vegetables; it's a precursor to vitamin A. And he inserted the gene for it into the yeast that is used to make bread. There are tons of cool examples. The most famous one involves Justin Atkin, who is the author of the channel and makes most of the videos (he also has a lot of cool guests on sometimes). He's lactose intolerant, because his genome doesn't have the lactase persistence gene that lets adults digest milk. So he actually genetically engineered a virus that would inject the lactase gene into cells. And then he took it himself.
ALYSSA: He at least temporarily cured his own lactose intolerance. I want to really emphasize, though, that doing this to yourself without a lot of rigorous testing first is a bad idea, and you should not do it.
SPENCER: Wow. So basically, he created a virus that, when he injected himself, would insert this gene into his own cells?
SPENCER: Wow, that's pretty wild stuff.
ALYSSA: Another video is the spider silk one. You know, of course, spider silk is legendary for its high tensile strength and all these other cool properties. We haven't been able to make it available commercially, because it's really hard to breed spiders. It's a lot harder to breed spiders than silkworms, let alone, you know, sheep, and so on.
SPENCER: Got it. So the idea is, we could actually just get plants to grow the spider silk.
ALYSSA: And this is just, you know, getting bacteria to grow the spider silk.
SPENCER: Oh, wow. Yeah, my understanding is that this stuff can be ridiculously strong, but also flexible, almost like thread or something.
ALYSSA: Spider silk is actually made out of several different proteins, because spiders need several different types of silk to spin their webs.
SPENCER: Oh, sure. I didn't know that.
ALYSSA: Yeah. So you can adjust the properties of it, like, how elastic it is, how sticky it is, and so on.
SPENCER: So the immediate thing that comes to mind discussing this is, when is someone going to make some horrible virus that they release? You know, what are your thoughts on that?
ALYSSA: Yeah, when somebody asks about implications, what comes to mind for me is positive implications, although it's sort of hard to list all of the cool stuff that we'll be able to do, because we don't know what a lot of it is yet. Because if we knew what it was, we would have done it already.
SPENCER: What are the kinds of categories of cool stuff that could come out of genetic engineering? Like, you know, in theory, we could modify detrimental genes or insert helpful genes after we're born, right? That'd be pretty amazing.
ALYSSA: Yeah, it's sort of tricky to do that, if you're talking about humans, tricky to do it with the entire body all at once, at least when someone is already an adult, because you'd have to get it into every single cell. It's a lot easier to do it with a limited population of cells.
SPENCER: Got it. So like in one organ or something?
SPENCER: I see. So that may be an avenue towards new treatments. And then also these kinds of interesting applications, like you mentioned, of changing the way food is like giving it more vitamins, or trying to make it healthier, or making new materials like the spider silk. Any other categories of interesting applications come to mind?
ALYSSA: Yeah, Ginkgo Bioworks is now a fairly large company that is applying it to the perfume industry.
SPENCER: Oh, interesting, to try to make kind of new, interesting scents?
SPENCER: Cool. And then in terms of the bad stuff, should we be afraid of this? You know, how risky is this stuff?
ALYSSA: It's a lot harder to make bioweapons than it is to insert single individual genes. The DNA for the smallpox virus is online, you can find it, but the entire virus is a lot larger and a lot more complicated than just a single gene.
SPENCER: Well, that's good news. Although, you know, someone once said something like: the IQ needed to destroy the world is dropping every year. It's a little disturbing if things go too quickly. At what point is it actually not that hard to make incredibly dangerous stuff?
ALYSSA: Well, the flip side is if anyone is able to engineer their own virus, will everyone be able to engineer their own vaccine?
SPENCER: Yeah, as with many of these things, it seems like there's this race between the technology to do bad stuff and the technology to mitigate bad stuff. Unfortunately, I feel like the technology to do bad stuff tends to be easier than mitigating it. Think about computer security: it's thousands of times more difficult to secure a system than to attack it, right? I hope this asymmetry is not general, but it seems to me there may be something general about it.
ALYSSA: There were several different DIY COVID vaccines. Unfortunately, none of them were tested very rigorously. So they might work, but the problem is that we can't be sure, because it was a lot easier to make them than it was to test them thoroughly.
SPENCER: Did any renegades inject themselves with them?
ALYSSA: They did, but we don't know how well they worked, because there was no rigorous clinical trial.
SPENCER: Well, I mean, that is really amazing to think about: a future where there's a new virus, and then 20 people around the world just invent vaccines that actually work? That'd be incredible.
ALYSSA: Sure, but then there are 20 different vaccines available on www.vaccinebay.com. Which ones work? Your 75-year-old grandma, how is she going to figure that out?
SPENCER: Yeah, it's a problem. Now we need citizen-science randomized controlled trials to occur, right?
SPENCER: People opt in to be randomized to different vaccines. It's a wild world. So another area where technology is obviously advancing really quickly is AI. I'm curious to hear your thoughts on the ways people tend to think about AI wrongly, and what you think better ways of thinking about it are.
ALYSSA: I'm an AI engineer, and I've been interested in AI since I was a child really, so I could go on for quite a while about this. But one thing in particular that I've been thinking about recently is that people, I think, sort of by default, think of AIs as being much more similar than they'll actually be.
SPENCER: Similar to each other you mean?
ALYSSA: Yeah, similar to each other.
SPENCER: Right. So rather than saying, "Oh, AI is like this or like that," you really have to get into the details: What is the AI you're talking about? How is it designed? What was it trained on, et cetera?
ALYSSA: Yeah. So computers are, the technical term is, Turing complete. That means that a computer, if it's large enough and fast enough, can compute anything. And in particular, that means that once AI is sufficiently advanced, a computer can compute any mind, or at least any mind that is physically possible.
SPENCER: Right. You're limited by computational constraints, and also by our ability to design algorithms that can do that computation, right?
ALYSSA: Right. So I think maybe a more evocative term, in some sense maybe a more accurate term, than AI might be "alien summoning magic." Because imagine any alien from any science fiction book that you've ever read, with any kind of society, any kind of thought or language, anything you can possibly think of. If it's physically possible for an alien to think that way and to work that way, then once AI is sufficiently sophisticated (not now, probably not this decade, but eventually) you'll be able to write an AI that thinks and acts in the same way that that alien does.
SPENCER: Interesting. Well, I was looking at this example of an AI that classifies images. They gave it a classic optical illusion where, viewed at a certain rotation, it looks like, let's say, a young woman, and at another rotation it looks like an older woman. And the remarkable thing is that the AI tended to flip from seeing a young woman to seeing an older woman at about the same rotation that I did, and a lot of people did. So somehow, even though the AI wasn't trained to see things the way a human does, it had similar properties with regard to this optical illusion. But on the other hand, we've also seen lots of cases of adversarial examples for AI, where you can cook up an input that, to a human, would look like, "Oh, clearly that's a dog," and then the AI will say there's a 99% chance it's a teacup or something like that, because you've added little bits of noise to it, reverse engineered based on knowing how the AI works, that convince the AI it's a teacup. So do you have any comments on that? Should we expect AIs to converge to some degree on certain aspects, like being tricked by certain optical illusions, whereas in other cases we wouldn't expect them to converge?
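The adversarial-example trick Spencer describes can be sketched in a few lines. This is the core idea behind the "fast gradient sign method" (FGSM): knowing the model's internals, nudge each input feature slightly in the direction that most lowers the correct class's score. The linear "classifier" and all numbers here are invented for illustration, not any real image model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier over 100 features: score = w . x ; score > 0 means "dog".
w = rng.normal(size=100)

def predict(x: np.ndarray) -> str:
    return "dog" if w @ x > 0 else "teacup"

# Start from an input the model confidently labels "dog".
x = w / np.linalg.norm(w)  # aligned with w, so the score is clearly positive

# Adversarial perturbation: step each feature by eps against the gradient of
# the score (for a linear model, the gradient with respect to x is just w).
# Each individual feature barely changes, but across many dimensions the
# small per-feature changes add up to a large swing in the total score.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x), "->", predict(x_adv))
```

Real attacks on image classifiers work the same way with the network's gradient; with enough input dimensions (pixels), the per-pixel change can be small enough to be invisible to a human while still flipping the label.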
ALYSSA: Yeah, I think humans sort of want everything to be human-like. There's this tendency that's called anthropomorphism. So any evidence of AI being human-like, or of neural networks being brain-like, is going to get played up a lot. And I don't think neural networks are very brain-like. I'm sure if you poke around, you can find some analogies; convolutional neural networks have some analogies with the visual system. But something like GPT-3 is more different from you than you are from a rat or an insect. It's really, really different.
SPENCER: Yeah, the idea that these are kind of alien minds is sort of disturbing to me, because we want to be able to reason about our AI systems, right? If you think about it, how do you know that your friend isn't just going to randomly kill you one day, or suddenly do something really disturbing and upsetting? We know that because we have a good sense of the way their mind works, right? Like, "Oh, well, this person is really nice, and they're conscientious and ethical," and so on. And as you say, we want to apply this to everything, we want to anthropomorphize. But with these alien minds, it's actually kind of dangerous to do this: "Oh, well, this AI I'm chatting with has always been nice to me, so therefore it's going to behave in a certain way." It's like, no, that mental model doesn't even necessarily make sense.
ALYSSA: Right, exactly. So there are several different pieces to that. One is that we want to be able to design a mind that is reliably not going to do anything like kill humans, even when it has the power to do so, which it mostly doesn't have now, but may acquire in a few decades.
SPENCER: They're already starting to hook up AI to like death robots in the military. So I don't know.
ALYSSA: Yeah. And even once we know how to do that sort of design work, which we definitely don't right now, once we have the technology to design AIs, the technology is out there. Everyone has computers that can design AIs, and everyone has tools that can design AIs. There's nothing to stop anyone from just creating more and more AIs, which is why I think "alien summoning magic" might be a good term for it. Because if there are 100 types of AIs in the world, there's nothing to stop anyone, me or you, or Google, or the Chinese government, or the North Korean government, from just creating more: let's try this kind of AI, let's try this kind of AI, let's try this kind of AI.
SPENCER: Right. It's rather disturbing that it's not good enough to just make some AI that's safe and predictable. You have to worry about any wildcard AI out there being unsafe and unpredictable, right?
ALYSSA: Yeah. Humans are already having enough trouble as part of a society with 8 billion people whose DNA is 99.9% identical to each other. How would we cope with a society with 8 billion different types of AIs that all these different computer programmers created, each one of which is as different from us, and as different from each other, as you are from a monkey, or even as you are from a rat?
SPENCER: And with humans, the typical mind fallacy is so strong, where we tend to assume even other humans are just like ourselves, and we often way over-exaggerate their similarity to us. The classic example being: someone can form mental visual imagery, so they assume that everyone can, when in fact there are some people who can't. Or if people hear their own voice in their head when they're reading, they assume everyone is like that. So even with humans, who are actually probably really, really similar to us, we tend to way overestimate their similarity. It seems like that's going to be an even bigger problem for these kinds of alien minds that really are probably almost nothing like us.
ALYSSA: Right, exactly. If we manage to develop this technology without causing some sort of horrible disaster, there will be this transition: from viewing minds, meaning us, and to some extent animals, as mysterious, magical, soul-like entities, to having an engineering discipline of minds, to seeing minds as things that are made out of parts, where humans have a visual system and an audio system and a motor system and so on. Once you can create minds in a computer, then you can mix and match and redesign these parts any way you please.
SPENCER: Yeah, it's really weird to think about mind engineering, and that being a discipline that people engage in. And for anyone who hasn't seen AIs that seem mind-like, I definitely recommend checking out GPT-3. Some of the examples out there are a little bit exaggerated, because they might have actually taken 20 tries to get that response, but GPT-3, to me at least, feels like the most mind-like of any AI I've ever seen.
ALYSSA: Well, you can play with it on AI Dungeon.
SPENCER: Yeah, AI Dungeon is a good way to try it out, though I think you have to have the premium plan and the Dragon model to really see it. So another thing I want to ask you about is the forgetting that we do as a society, the way that we tend to forget our history and erase things. Do you have any comments on that?
ALYSSA: Yeah. So here's a big announcement relevant to me and a bunch of other people I know who have been involved in the longevity industry, trying to reduce the effects of aging and help humans live longer. I'll just read from the press release here: “The first major attack on the aging syndrome using the methods of gerontotherapeutics will be by the newly created National Foundation for Anti-Aging Research in New York City. This organization will carry out large-scale animal screening and clinical testing on the vectors uncovered by gerontology and gerontotherapeutic research. The objectives of this foundation include the development of practical anti-aging agents that may be used by the population under the supervision of medicine. This foundation will concentrate upon the improvement of health and lifespan at various levels of the population from about 30 years of age. Therefore, gerontotherapeutics is essentially preventive medicine.”
SPENCER: Sounds very modern.
ALYSSA: Except, of course, this is not actually a new announcement. This is actually a press release from 1951. And I don't know what happened to this organization; it just seems to have disappeared into the ether, and no one has ever heard about it again.
SPENCER: It's really interesting how there are so many ideas that come up again and again throughout history and then kind of disappear. One example is utilitarian thinking, where you treat maximizing the well-being of society as the ultimate goal, especially viewed through the lens of increasing happiness and reducing suffering. This is an idea that would appear in one place in history, then disappear for a long time, and then reappear, and now it's coming back with the effective altruism movement. I don't know if you have any thoughts on that.
ALYSSA: My friend Matthew wrote about this a few years ago. He said something like: we effective altruists love and support effective altruism, but we weren't the first people to think of this; the Mohists in China actually had this idea many centuries ago. It's a cool idea, but in order to survive and prosper over the long term, an idea has to be sustainable in a competitive memetic environment.
SPENCER: Yeah. So what are some other examples you'd point to of ideas that appeared and then disappeared, or that we misremember or erased from our past?
ALYSSA: Have you heard of MAOIs?
SPENCER: Some kind of drug, I don't know what it is, though.
ALYSSA: Yeah. Back in the 1950s, they were testing different treatments for various diseases, and they found that there's an enzyme called monoamine oxidase that is used to break down various substances in the brain. There's a class of drugs that inhibit the activity of this enzyme: monoamine oxidase inhibitors, or MAOIs. And they noticed that when people took these substances, it made them much happier and more energetic. They thought, "There are these people who have what we would nowadays call clinical depression, or major depressive disorder; might this be a good treatment for depression?" And there were clinical trials run. This being the 1950s, they were abbreviated compared to now, but they did run clinical trials, and the drugs did get approved by the FDA, to the extent such approval existed in the 1950s. And they were very widely used for several decades. But during the 1970s they became a lot less popular, because there were a bunch of newer drugs that were discovered: a class of drugs called the tricyclics, and a class of drugs called the SSRIs, selective serotonin reuptake inhibitors.
SPENCER: Like Prozac would be an example, right?
ALYSSA: Yeah, things like Prozac and so on, which didn't have some of the side effects of the MAOIs. So for the most part, everyone switched over to those, and the older class of drugs was largely forgotten about. But it turns out that if you dig into the data, number one, a lot of the side effects were exaggerated. It's sort of like a game of telephone, where every layer of the game has an incentive to be more conservative, to insert additional layers of caution or warning.
SPENCER: Because people want to avoid blame or they don't want to seem irresponsible or something?
ALYSSA: Right. So in the 1960s, a few people who were on MAOIs developed what's called a hypertensive crisis, from eating way too much food containing a substance called tyramine. Tyramine is primarily found in aged cheeses, fermented soy sauce, aged salami, and so on. A few people developed this reaction, and so people warned: okay, we have to watch out for this reaction, we have to watch out for eating fermented cheese. So no one on this drug can eat fermented cheese. Okay, so no one on this drug can eat any cheese at all. Okay, so no one on this drug can eat any cheese, any chocolate, any beer, and a whole long list of other things. And eventually most people just stopped prescribing it, because when the list of "don't do this" grows sufficiently long, eventually it's just, why bother?
SPENCER: So what's the most modern thinking on this?
ALYSSA: In modern times, a few people have gone back and looked, and they found that the original class of drugs was actually much more effective than a lot of the drugs prescribed in the modern era. And if you look at the DALY impact, the disease burden of depression, it's kind of ridiculously large. The way that's computed is: you look at everyone in the country and count how many people have this disease, and then you multiply by how bad it is for them, how much it impacts their life, how risky it is, what chance it has of killing them, how much it makes them suffer, and so on. You can do this for any disease. And if you do this for all the diseases that we know about, in the US I think the number one thing in terms of total badness is heart disease, number two is depression, number three is cancer, and number four, I think, is alcoholism or drug addiction, something like that. It's actually really, really bad.
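The burden calculation Alyssa describes (count the people affected, multiply by how bad it is for each) can be sketched in a few lines. The disease names come from the conversation, but the prevalence numbers and severity weights below are invented placeholders, not real epidemiological data; real DALY accounting also adds years of life lost to premature death, which this sketch omits.

```python
# Hypothetical prevalence (people affected) and severity weight
# (how much the disease impacts a year of life, on a 0..1 scale).
diseases = {
    "heart disease": {"prevalence": 20_000_000, "weight": 0.25},
    "depression":    {"prevalence": 17_000_000, "weight": 0.20},
    "cancer":        {"prevalence": 10_000_000, "weight": 0.30},
}

def burden(prevalence: int, weight: float) -> float:
    """People affected times per-person severity: a YLD-style burden figure."""
    return prevalence * weight

# Rank diseases by total burden, highest first.
ranked = sorted(diseases.items(), key=lambda kv: burden(**kv[1]), reverse=True)
for name, d in ranked:
    print(f"{name}: {burden(**d):,.0f} disability-adjusted years")
```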
SPENCER: It's statistics like this one that inspired me to create UpLift, which is our app for people with depression. So I definitely relate to the idea of what a massive problem this is. So now, are people actually prescribing these drugs again? Or is it still not taking off?
ALYSSA: Some people are. There have been a number of literature reviews done saying this is a tool that's been underused, that people are under-prescribing it, and so on. I don't think there are really that many people arguing against it, if you go to Google Scholar and search for the papers. It's just that these drugs have been around for so long and they're off-patent, so there's no pharmaceutical company with a big investment to call up 100,000 doctors and say, "Here's this literature review on Nardil, you ought to read it." So some people are arguing for it now, but a lot of people just haven't heard about it yet.
SPENCER: And you can imagine people have some kind of affect around them, like, "Oh, those are old-fashioned," or, "Oh, those are bad or dangerous," right? People may not have any specific knowledge about them, but just kind of view them unfavorably, which might affect their willingness to prescribe them.
ALYSSA: Right, exactly.
SPENCER: Do you have another example you want to talk about, some other sort of forgotten history?
ALYSSA: There's also a lot of forgetting in politics, a lot of motivated forgetting, where people believe thing A, and then they'll switch to believing thing B, and then there's this sort of implicit agreement to just never talk about thing A ever again. I think this is bad for our epistemology.
SPENCER: Right. So would the Iraq War be an example of that?
ALYSSA: Yeah, that's actually exactly what I was thinking of. If you look at maps of voting in the US, most red states in 2004 were still red states in 2016. Most voters don't usually move between states all that much, and most people don't change parties all that much; there's some movement, but not a huge amount. So most people who voted for George W. Bush, and the Iraq War, in 2004, we can be pretty sure voted for Donald Trump in 2016. But those two have completely opposite positions on the war.
SPENCER: Yeah. I mean, one thing that strikes me about the Iraq War in particular is that there was a lot of convergence from a lot of different people that it was a good idea at the time. It wasn't just one party or one powerful person. And yet it turned out so badly that it makes a lot of sense that people are incentivized to pretend they were not involved in that decision at all.
ALYSSA: The New York Times, which people nowadays think of as this sort of leftist, anti-administration newspaper, did a lot to support the Iraq War and to support the Bush administration's conclusions about it.
SPENCER: It is pretty interesting how there can be this kind of complete rewriting of history. It does make me wonder how many great ideas we've lost, ideas that actually were believed 20 years ago, or 50 years ago, and then just fell out of fashion, and we don't have them back yet.
ALYSSA: One of the things we saw with COVID, for example: a friend of mine who grew up in Russia has a metaphor for this. He calls it the "aerial shoe-changing championship," which is what they call it in Russia, because the Russian government changes its mind so much. With COVID, the policy changed so much and so quickly, and as soon as it changed, everyone just sort of forgot the old one. "Oh, you shouldn't wear a mask, because we need to save those for health care workers," or "masks don't work," or whatever. And then a month later, masks were mandatory and everyone had to wear them. And the same thing happened with a lot of other stuff.
SPENCER: Yeah. And with that one in particular, I think there might have even been some political flipping, where at first liberals were like, "Oh, you shouldn't worry about that," and conservatives were like, "Oh no, there's this virus coming." And then it kind of flipped. I don't know, it's just really bizarre.
ALYSSA: I actually went and dug up some of the original papers from 1918, from the Spanish flu epidemic. And they were obviously not as sophisticated, but they had some experience: "Hey, we tried to quarantine all these people; it didn't actually work. We tried to get everyone to wear a mask; sometimes it worked, sometimes they all just sort of gave up," and so on and so on. And this seems like useful advice, but it was totally forgotten about. As far as I can tell, I'm the only person I'm aware of who has read this paper in the last decade.
SPENCER: Well, yeah, it's a really interesting point. I've actually come across this a bunch of times, and maybe we just tend to view people from the distant past as unsophisticated, right? We view them as, "Oh, well, they didn't have our technology," and we view ourselves as so wise. I'm not sure we actually are wiser. I think we definitely have better technology, but I'm not sure we're better at thinking. What do you think about that?
ALYSSA: I'm honestly not sure what to think. People talk about the Flynn effect: people nowadays have higher IQs than people in, say, 1900. And in some sense that makes sense, because nutrition has improved, we have fewer toxins floating around, and so on. But if you look at the literature, for example: is the poetry people write better now than it was 100 years ago? Are modern detective stories, the best of them, better than the Sherlock Holmes stories? And so I'm honestly not sure what to think.
SPENCER: Right. And if you look at political speeches, it seems pretty clear that they've gotten worse on average than they used to be.
ALYSSA: That's one of the benefits of going and digging through history, primary source documents especially. You can find things like MAOIs and lots of other technologies that are cool but have been forgotten about. And I think it also gives you better epistemics, because you can be aware of what positions people held in the past, what people argued for. You can read things from the Confederacy back during the Civil War era. Obviously they're not accurate, but you can be aware that in the 1860s people were talking about, "Oh, the glorious march of science has proven this race genetically inferior," and so on. And on the flip side, people were trying to prove that the Irish were of African descent, because they wanted them to be inferior, and all this other stuff that people would come up with. It's useful seeing what they got right, and also seeing what they got wrong.
SPENCER: Right. There's sort of a word of warning there: look, people were couching all these terrible ideas in the name of science, and maybe this is still happening, right? Yeah, that's super interesting. So the last topic I want to discuss with you before we wrap up is how we form high-trust communities on the internet. It seems potentially really important, because a lot of the traditional communities that used to exist in society, like religious communities, going to church every Sunday, or local neighborhood groups, seem to have gotten splintered or weakened over time, partly due to people becoming less religious, and also other factors. And now with COVID, the problem is even worse. So, yeah, I'd love to hear some of your thoughts on this topic.
ALYSSA: If you look at the sort of world my great-grandparents would have lived in, I don't want to idealize it, because as someone who's not a Christian, I'm certainly aware of the many problems it had, but it was pretty localized. It was harder for people to move, and so people would form these deep relationships, with the same people, over a long period of time. That's pretty different from a lot of the societies that we have today. I would guess most people have some level of need for that, for having a high-trust, high-investment society. And if I compare it to the communities that I'm in: Discord, for example, is a great community-forming tool, I guess you could call it, that we have on the internet. I'm part of a lot of great Discord communities, but the level of investment that each person has in those communities is mostly pretty low. They don't put in that much time, they usually don't put in any money, and usually people can just leave, and it's not that big a deal. So it's a different kind of thing, and I think it leaves a hole for a lot of people.
SPENCER: Would you trust someone just because they're part of your Discord community? Because that seems to me like a really essential part of having one of these communities.
ALYSSA: Right. And the answer is usually, no, of course not. I don't know who they are; they're anonymous, which has some advantages, but of course also has some disadvantages. And oftentimes, the way that internet communities compete is by making it really easy to join. One of the ways that Facebook took off and became the world's biggest social network is they said, "Oh, it's totally free, it takes 30 seconds to sign up, it's really easy to come in and join." But if it's really easy to just enter, how do you screen out untrustworthy people? How do you make sure everyone is committed, if you're trying to set up a closer group? The two things are sort of at cross purposes.
SPENCER: Yeah. And it seems like there are at least two different elements at work towards high trust. One is, "This person is similar to me": this person has my values, this person shares beliefs with me. So that similarity can help you trust the person more. But the other is reputation. You might think, "Well, if this person betrays me or does something bad, then everyone in the community is going to know, and they clearly don't want that to happen." So even if they're just acting selfishly, hopefully that should keep them in check from at least the worst behavior. It seems to me that with these online communities, we can get the first thing to some degree; in some of these communities you can say, "Well, they wouldn't be in this community unless they at least thought somewhat similarly." But it's very hard to get the second thing, especially if they can create an anonymous account. And even if they don't, are they really going to face the reputational effects in these kinds of Discord channels that you'd get if you, say, lived in a small Christian town and did something really bad, where everyone would find out and hold it against you?
ALYSSA: Yeah. In order to have that high degree of investment and trust and loyalty and so on, it seems like, and maybe I'm wrong, you have to be able to have some kind of consequences for people who then abuse that trust or who act badly. But if you have consequences, someone has to impose those consequences. So then how do you prevent that from being abused? This is one of the conundrums of the ages, and certainly, when religious communities have had the power to impose consequences, they misused it in some cases: they would punish people for being with the wrong person, or for being gay, or whatever. When internet communities have tried to do that sort of thing, I think they've often fallen into an even worse failure mode, where the way they try to set up high trust is to just have one leader who creates a culture around themselves, and the leader has the power to exile people. But then oftentimes the leader is a sociopath who just abuses this for his own benefit.
SPENCER: That's unfortunate. So ideally it'd be a benevolent dictator for life, but actually it's just a dictator for life. Not to put too fine a point on it, but do you think it's actually more that they're sociopathic, or that they're narcissistic?
ALYSSA: I don't really know. I'm not sure I put too much stock in categories like that. I think the important thing is just that they're bad people who are doing it for bad reasons, and you should not join them.
SPENCER: Right, right. Well, the reason I bring it up is that I actually do notice strong clusters that differentiate sociopathic behavior and narcissistic behavior. What they seem to have in common, from what I can tell, is that they both tend to involve low empathy, and they both involve harming others, but the motivational structures are different. The sociopathic way is to hurt others because you don't feel any empathy and you're just selfishly optimizing; you're indifferent, and if someone gets in your way, you'll harm them because it's better for you. Whereas the narcissistic approach is much more, "I want people to worship me, I want everyone to think I'm amazing, my ideas are better than everyone else's, I'm more entitled to everything than everyone else." It's a lot of trying to get everyone to feed your ego.
ALYSSA: That's a fair point.
SPENCER: But anyway, going back to these communities: any ideas for how we can build better communities that actually give us more of what's lacking in modern society?
ALYSSA: I sort of mark this down as an open question, because I could speculate about it, but I don't really have any great ideas. I know people who have made attempts. I'm part of communities that were originally on the internet and tried, to some extent, to form into real life, and they've had some successes, but also a lot of failures. So we're still working on it; it's still an open problem.
SPENCER: Any patterns you've noticed in what's gone well?
ALYSSA: I think the communities that I've been part of attracted a lot of interesting people. Certainly a lot of interesting people go to something like Burning Man, which is a real-world event, but then it's sort of hard to define an identity boundary around who is part of the community and who isn't. And I think that creates a limit on how much of an investment you have, which then creates a limit on how much the community is a coherent entity that can work together. If you compare this to, say, the Mormon Church, there's a fairly defined line around who is a Mormon and who isn't, and people who are Mormons, on average, certainly have a pretty different life from people who aren't Mormons. Whereas for something like Burning Man, anyone who can get a ticket can come to Burning Man. I'm sure there are statistical correlations, but there's no membership application process for coming to Burning Man.
SPENCER: Yeah, interesting point. If I think about what makes something a community, if you want to construct a community that gets the benefits of classic communities, it seems like, first of all, you need a large amount of time spent together, right? It doesn't really make sense to talk about a community that doesn't interact very much; you want there to be a lot of interactions, a lot of time spent interacting mutually among that group of people. And I think that's one thing Burning Man struggles with as a community, because a lot of burners go to Burning Man, and yeah, there are these other events periodically throughout the year, but the reality is there's just kind of not that much time to be together overall. The second thing is values: of all the different things that humans care about, which ones does this group prioritize? I think it's hard to have a community that doesn't have one or two top values that are generally accepted in that community as the most important. So in a Christian community it might be worshiping God, or acting with piety, or what have you; in a rationalist community, it might be trying to be rational and figure out the truth about the world, or something like that. But shared values seem like a fundamental element. A third is trust, which, as you mentioned, is at the core; without it you don't get that much of the benefit. This is basically: let's say you found out someone else was in the community, and you just met them. Would you let them stay in your house when you're away? Would you lend money to them? And I think this is something that Mormons are probably much better at than other groups. I think Mormons probably trust each other a lot more than a lot of groups trust random other group members.
ALYSSA: And you can't just naively increase the level of trust as the solution, which I've seen some people try to do. Because if you just let in random people and then give them a lot of trust, well, then sometimes they backstab you and take all your stuff, which is also bad.
SPENCER: Yeah, that's a really excellent point. And I think that leads to the fourth thing that communities have to do right: they have to have a way of dealing with defectors. So if someone violates the trust, what happens? Either they have to have a way of kicking people out, and it has to be reliable enough that the community doesn't just get taken over by defectors, or a way of reforming people within it, or basically just a way of selecting people so carefully that there just aren't that many defectors. But this seems totally essential to a good community.
ALYSSA: Yeah. On the first point, there's also the question of scale. For something that, on some level, is larger than what social scientists have called Dunbar's number, which is about 150 people (the largest number of people where everyone can know everyone else), there's the question of how you shard it. "Sharding" is a computer term for when everything can't fit on one computer, so you break it up across multiple computers. So if you have 10,000 people, and you want this community to spend a lot of time together, it's so big that everyone can't spend time with everyone else. Which subsets of people does each person spend time with? How do you sort that?
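The sharding idea Alyssa describes can be sketched in code. This is a minimal illustration, not anything from the episode: the member names, community size, and hash-based assignment are all hypothetical.

```python
# A minimal sketch of "sharding" a community larger than Dunbar's number
# (~150) into subgroups small enough that everyone in a subgroup can know
# everyone else. Member names and sizes are hypothetical illustrations.
import hashlib

DUNBAR = 150  # rough upper bound on a stable mutual-acquaintance group

def assign_shard(member_id: str, n_shards: int) -> int:
    """Deterministically map a member to one of n_shards subgroups."""
    digest = hashlib.sha256(member_id.encode()).hexdigest()
    return int(digest, 16) % n_shards

members = [f"member-{i}" for i in range(10_000)]
n_shards = -(-len(members) // DUNBAR)  # ceiling division
shards = [assign_shard(m, n_shards) for m in members]
print(n_shards)  # 10,000 people need at least 67 Dunbar-sized subgroups
```

Hashing is just one possible assignment; a real community would presumably group people by geography or affinity rather than at random, but the capacity arithmetic is the same.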
SPENCER: Yeah, it seems dramatically easier to build a small community of 100 people than to build one of 10,000 people. But that being said, communes of 100 people usually fail, so even that is a surprisingly difficult problem. Then imagine trying to scale that, where now people really can't all know each other; they have to trust more in the fact that a person is in this community. And now you're going to have a lot more defectors, or really bad people involved, that you somehow have to screen out, or weed out after the fact, because it's just that much more complicated. Another thing that I would point to as really essential, I think, is this idea of membership criteria. You can't really have a sustained community where there's just instant joining by anyone who feels like it, right?
SPENCER: If you could have that, then the community either would eventually just get too many defectors coming in, or it would just get too watered down. It's like, oh, our values are this, but then more and more people keep joining until the values are just the background, homogenous values of whatever society they're in.
ALYSSA: Yeah, regarding size: for communities that are lower investment, I think maybe that's true. For communities that are higher investment, I think maybe you need some sort of critical mass or critical size, which in some cases might help, because it might help you ensure stability. Mormons, for example, donate 10% of their income to the church, and that's a major sacrifice to make. If you're making $100,000 a year, you're giving $10,000 per year, every year, to this organization, and you're not going to want to do that if it's a group that just sprung up last month and, who knows, might not even be around next year. If it's a larger group that has more institution-ness and more organization-ness, if those are words, then you might have more confidence that, okay, if I give money to it now, or if I sign up for a volunteer position now and do a lot of work, then 10 years from now this group is still going to be around and people are still going to see the benefits of that.
SPENCER: Right. Maybe small communities and large communities just have different major problems. A small community has a high probability of just falling apart completely, everyone dissipating. Whereas a larger community, once you have 50,000 people, is probably not going to just disappear tomorrow; it's going to have momentum. But maybe there can be all these cracks forming in it, where it kind of stops being itself, or stops being good in many different ways, and nobody can stop it. It's this kind of complex system, and it's hard to maintain that level of trust and maintain the shared values and the quality and so on.
ALYSSA: You get sort of the political problems that no one quite knows how to solve, like American political polarization or whatever, although maybe at a slightly smaller scale.
SPENCER: Yeah, going back to the topic of group membership, I think that's really interesting. Because if we assume that you can't allow everyone to join a community, because that kind of undermines it, then you think about, well, what would you do in terms of group membership? One option is that you try to select for people who have certain traits, like really strong shared values, right? Another is that you create a membership cost: you make it difficult to get into the group, right?
ALYSSA: But then, if you make it difficult to get in, and people leave the group faster than they join, the community is going to cease to exist. So then maybe, like the Mormons, everyone has to spend a few years being an evangelist in order to cancel that out.
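The join/leave dynamic Alyssa raises can be illustrated with a toy model. The rates below are hypothetical assumptions, chosen only to show that a community with steady inflow and proportional outflow heads toward an equilibrium rather than automatically dying or growing forever.

```python
# A toy model of membership flow: a fixed number of new members join each
# year, and a fixed fraction of current members leaves each year. All the
# numbers are hypothetical illustrations.

def simulate(n0: float, joins_per_year: float, leave_rate: float, years: int) -> float:
    """Run the yearly update: n -> n + joins - leave_rate * n."""
    n = n0
    for _ in range(years):
        n = n + joins_per_year - leave_rate * n
    return n

# 1,000 members, 50 joining per year, 10% leaving per year:
# the community shrinks toward the equilibrium 50 / 0.10 = 500 members.
print(round(simulate(1000, 50, 0.10, 50)))  # ~503 after 50 years
```

If the leave rate rises or the inflow falls, the same arithmetic sends the equilibrium toward zero, which is the failure mode being described.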
SPENCER: Right. Yeah, that's really interesting, and it can kind of be self-defeating. But a third thing you can do is make it costly to continue being a member. That has the same problem, in that it can drive people away, but it also helps solve another problem: if it's costly to be a member, then you're probably not going to stay a member just for the heck of it. You have to be devoted enough to bother to continue being a member. And that could mean you have to tithe by giving away money, or it could be that you have to do all this community service. Or maybe you just have to believe really weird things or do really weird things that make you rejected by the rest of society, and that's the cost of membership.
ALYSSA: One interesting question: the Internet has seen the emergence of large groups that are completely anonymous and completely ephemeral. Like the GameStop stock purchases –
SPENCER: With WallStreetBets?
ALYSSA: Yeah. Hundreds of thousands of people who didn't know each other at all just got together and bought the same stock, the same week. And then they all went away. What are these things? Are they good? Are they bad? What are they going to do? We've never had this before.
SPENCER: Yeah, it is really strange, because in the WallStreetBets phenomenon, they all acted in this coordinated way in order to try to profit off of short sellers in the GameStop stock. It's like they were this sudden emergent entity that was very powerful, right? It was powerful by virtue of all of them being willing to act in a coordinated way to achieve a coordinated goal. And it just emerged and then disappeared. It's really interesting to think about, because these kinds of conglomerations of people can actually do really interesting, but also terrifying, stuff. Another example: let's say someone makes a joke, and someone finds it offensive, and then suddenly they find themselves being piled on by thousands of people on Twitter, and then they lose their job. That also is a kind of temporary agglomeration of coordinated behavior. And in some cases, maybe that's good, if the person really was a bad person. In other cases, maybe it's totally unjustified, and the person was making a silly joke and really doesn't deserve it at all.
ALYSSA: And then, to tie in something from the earlier discussion: is part of the fourth system of warfare going to be learning how to manipulate these mobs? Like, if you're the Chinese Communist Party, it's to your advantage to pick out: who are America's best nuclear weapons designers? Who are America's best policy professionals, or congressional staffers? Who are America's best generals, or whatever the positions are? And then maybe you fabricate evidence, or dig up something from their past, and then manipulate Twitter mobs into going after them and getting them all fired.
SPENCER: Oh man, this is a terrifying future. I think a lot of people don't know this, but Russia actually experimented with campaigns on Twitter to get people to believe in events that didn't happen. For example, they created a fake story about a factory blowing up or something, and they got a whole bunch of people to tweet about it, which was really weird. And I think in that case they had no particular reason; in other words, they weren't actually trying to achieve some specific goal. I think they were just testing the methodology: oh, we can actually fabricate a completely false news story and get lots and lots of people tweeting about it.
ALYSSA: Yeah, that's part of the "firehose of falsehood" strategy; there's a Wikipedia page about this. It's sort of an alternative spin on propaganda. With old-style Soviet propaganda, you try to get people to believe: oh, the Soviet Union is this glorious world power, and it's the best country in the world, and it's going to defeat the evil capitalists, and so on. The Soviets obviously spent a lot of time and energy on this, but eventually people realized that it wasn't true, and once people realize that, it's hard to undo. Now I think Putin is taking a different approach. I've talked to Russians over the internet (you can go on the Russia subreddit, for example), and it's really fascinating to talk to them about things like Alexei Navalny, or Putin, or the corruption scandals, because they have these bizarre beliefs, or at least beliefs in the sense of things they will tell you online, but the beliefs are all completely contradictory, and they will contradict each other, sometimes in the same thread. You can't just prove something false and then have people say, "Oh, that's false, so he is untrustworthy." Rather, they try to get people into a state where there are a million different possibilities, and they're all false, so people just believe everything and nothing, if that makes any sense.
SPENCER: Right. If you want to convince someone of a falsehood, one way to do it is to push that falsehood over and over and over again, through every channel. But another way is to spread so much misinformation, so many different theories, that people just can't find the truth. They're like: how do you pick out the true one among this huge pile of falsehoods?
ALYSSA: I wish I could express it better. There's not really a substitute for actually going there, or even better, going to Russia and other authoritarian countries and seeing for yourself, which, unfortunately, I haven't been able to do with the pandemic. People say these bizarre things. Navalny was arrested by the Russian government on these made-up charges, but they won't even say, "Oh, Navalny deserved to be arrested," or "Navalny is guilty," or something like that. They say, "Oh, Navalny was being protected by the government, and he's actually guilty of all these other things, and actually the government is trying to get him off as part of some bizarre conspiracy." None of it makes any sense.
SPENCER: Yeah, not to get on our high horse: we've got a popular QAnon conspiracy in our own country here. Of course, it's obviously harder to resist when it's coming from the top, right? The whole point is, if the country is officially trying to push falsehoods on the people, that makes it even harder. But in our world of social media, we get these runaway belief systems, where I don't even know if QAnon believers believe things that are similar to each other. In other words, it's almost like this constantly mutating idea complex, rather than a single "here's what we believe."
ALYSSA: We saw that a lot of people thought Biden was not going to be inaugurated, and then Biden just was inaugurated. And then some people did change their minds, I think, about some things, but other people just switched to different beliefs.
SPENCER: Right. It's like you either deny it ("no, no, Biden's actually not inaugurated, this is all just a farce"), or you stop believing QAnon, or, probably the most likely scenario, you find some new explanation within your belief system. You do the minimal change to your belief system to incorporate this new fact you can't deny, and then who knows where that goes; it can take you in all kinds of weird directions.
ALYSSA: Prediction markets have, I think, become bigger over the last few years, with the elections, and also with cryptocurrency; we have cryptocurrency prediction markets now. Maybe it would help if they were a common thing that everyday people frequently participated in.
SPENCER: QAnon conspiracy prediction market, that'd be fun.
ALYSSA: Right. Because we saw that, a month after the election, people were still betting that Trump would win, even though the election was over. There's some question, in a sense, of what a belief is: do people really believe things, or are they just saying them? But if you're willing to bet $50,000 on something happening, there's certainly a sense in which you do really believe it. And if you keep betting on things happening, and you keep losing money, and this keeps happening over and over and over again, maybe people would notice that they keep losing money.
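The incentive Alyssa points at can be made concrete with simple arithmetic. This is a sketch of a generic binary prediction market (each share pays $1 if the event happens, $0 otherwise); the prices and stakes are hypothetical, loosely mirroring the roughly 95/5 post-election odds being discussed.

```python
# Payoffs in a simple binary prediction market, where each share pays $1
# if the outcome happens and $0 otherwise. Prices and stakes here are
# hypothetical illustrations.

def payout_if_right(stake: float, price: float) -> float:
    """A stake buys stake/price shares; each pays $1 if the bet wins."""
    return stake / price

favorite = payout_if_right(1000, 0.95)  # only a ~5% gross return if right
longshot = payout_if_right(1000, 0.05)  # 20x the stake if right

print(f"${favorite:,.2f}")  # $1,052.63
print(f"${longshot:,.2f}")  # $20,000.00

# Correcting a mispriced 5% tail ties up roughly 19x as much capital on
# the favorite side for a small return, which is one reason the skew can
# persist even when most traders think the favorite will win.
```

So losing bettors really do pay for their beliefs, but the small, low-return upside on the favorite side also helps explain why the odds corrected so slowly.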
SPENCER: Yeah, that's a really interesting idea. Either they directly lose money, which would be the best lesson, or they just notice that the side they're on keeps losing in the prediction markets. That's pretty interesting. Although, just as a side note, I heard Vitalik talk about why that discrepancy persisted in the market, and it was actually quite difficult to make a trade on it. Because the odds were so skewed, if there's a 95% probability in a prediction market that Trump is going to lose, it only takes a relatively small amount of money to keep the price at 5%, whereas you have to throw in a ton of money in order to make a reasonable trade on the other side. So there's this weird kind of unbalanced thing there. Although, that being said –
ALYSSA: I actually made money on it.
SPENCER: Yeah, I think I made a lot of money on it, too. I was using, you know, one we can easily use here in the US. But that being said, I've also been reading forums where people were making bets, and yeah, there were a lot of QAnon conspiracy theorists and Trump diehards who couldn't possibly believe he could lose. So I think that actually is a real phenomenon; people actually just believe these things. Oh, listen, this was super fun. Thanks so much for coming on.
ALYSSA: Cool, thank you.