November 2, 2023
Who's Moloch? And what do we mean when we call something "Molochian"? What does healthy competition look like? How can we avoid or extricate ourselves from Molochian scenarios? Are our instincts about fairness and unfairness usually accurate? Is it possible for today's social media giants to create products that people want to use and that are actually good for people to use? What kinds of problems could conceivably be solved by "trustless" solutions? Or "high-trust" solutions? Where do you fall on the "rationality-to-woo" spectrum? When do we not want to find rational explanations for mysterious phenomena? What kinds of new rational explanations might we find if we opened our minds to more "woo"?
Liv Boeree is one of the UK’s most successful poker players, winning multiple titles during her professional career, including a European Poker Tour Championship and World Series of Poker bracelet. Originally trained in astrophysics, she now works as an educator and researcher specializing in the intersection of game theory, technology, and catastrophic risk reduction. Her main focus areas at present are the risks posed by artificial intelligence and other exponential technologies. She is also a co-founder of Raising for Effective Giving (REG), an advisory organization that uses scientific methods to identify and fundraise for the most globally impactful charitable causes. Her latest project is the newly launched Win-Win Podcast, which explores the complexities of one of the most fundamental parts of human nature: competition. Follow her on Twitter at @liv_boeree.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Liv Boeree about unhealthy competition and solving Molochian problems.
SPENCER: Today's episode is sponsored by NetSuite. Your business was humming, but now you're falling behind. Teams buried in manual work. Things you used to do in a day are taking a week. You have too many manual processes. You don't have one source of truth. If this is you, you should know these three numbers: 36,000, 25, 1. 36,000: that's the number of businesses that have upgraded to NetSuite by Oracle. NetSuite is the number one cloud financial system, streamlining accounting, financial management, inventory, HR, and more. 25: NetSuite turns 25 this year. That's 25 years of helping businesses do more with less, close books in days, and drive down costs. 1: because your business is one of a kind, you get a customized solution for all of your KPIs in one efficient system with one source of truth. Manage risk, get reliable forecasts, and improve margins. Everything you need, all in one place. Right now, download NetSuite's popular KPI checklist, designed to give you consistently excellent performance, absolutely free, at netsuite.com/clear. That's netsuite.com/clear to get your own key performance indicator checklist: netsuite.com/clear.
SPENCER: Liv, welcome!
LIV: Thanks for having me.
SPENCER: Let's jump into a really fascinating topic, which is the term Moloch. What does Moloch mean, and why is it so important?
LIV: I guess the term Moloch originally comes from an old Bible story, or a legend from biblical times, about this war-obsessed cult who wanted to win at wars and accrue military power so badly that they would sacrifice the very thing they loved the most to this demon god called Moloch. They would sacrifice their children in order to win at wars. That was what they believed. This legend has passed down through the centuries as a warning against sacrificing too much of the things you care about in order to win at a narrow goal. In popular culture nowadays, Moloch has become synonymous with this idea of misaligned game-theoretic incentives: short-term incentives encouraging people in a competition to optimize for short-term goals which, if everyone follows them, make the system worse off on net. Perhaps the easiest example, the classic example people often talk about, is a military arms race, where countries pour more and more of their GDP into developing defense technologies or even offense technologies. Because if they don't do it, then they're going to be left vulnerable to their enemies, who they presume are doing it. So more and more resources get pulled into the military, into the war machine effectively, and fewer resources are available to the citizens. In an ideal world, all the countries would agree, "This is stupid. Let's not do this. Let's coordinate." But because of Moloch, because of this trap of misaligned game-theoretic incentives, everyone ends up having to do the crappy thing anyway. So the system is worse off overall. Another way you could think of it is as the god of unhealthy competition, or of negative-sum games.
SPENCER: I was about to ask how intrinsically linked to competition it is. If you have just a one-party system, is Moloch just not in play?
LIV: I think some people would say yes, and some people would say no. For simplicity, I prefer to define it around competitive systems where you have lots of different agents essentially competing for something. I've heard some people claim that a one-party system is actually the end point of a Molochian process: basically, one party was so good at power-maximizing that it managed to install a dictatorship. In that sense, if it turns out to be a dystopian dictatorship, then that was probably the result of a Molochian process, because the power-maximizers were able to accrue power fastest by being ruthless and thus winning. Although, if it were some kind of benevolent one-party system, I guess you'd probably say that wasn't a negative-sum game, because the end state is desirable, so that's technically not Molochian. But for simplicity, think of it as misaligned incentives acting on competitive systems with many agents within them. Technically, the simplest form of it is the classic two-person prisoner's dilemma, where the payoffs, assuming they only play one time, incentivize both players to defect, to do the selfish thing. But if they could figure out how to coordinate, then overall, from a God's-eye view, they would both be better off. You can think of Moloch as usually acting on a multi-way prisoner's dilemma.
SPENCER: Just to clarify for people who are not familiar, in the prisoner's dilemma, you have two prisoners and they're being pressured to rat on each other. Each of them is better off ratting no matter what the other person does. Whether the other person rats or not, they're actually better off ratting. However, as a group, the total sum of utility is greatest if neither of them rat. But the problem is because each of them have an incentive to rat, they're both going to do it. Unless there's some way they could coordinate to make some kind of unbreakable agreement. Did I get that right?
LIV: Yep, that sounds perfect.
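Spencer's summary can be made concrete with a small payoff table. The specific numbers below are illustrative, not from the episode; any payoffs with the standard prisoner's dilemma ordering (temptation > reward > punishment > sucker's payoff) behave the same way:

```python
# Illustrative prisoner's dilemma payoffs (higher is better).
# (my move, their move) -> (my payoff, their payoff)
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker's payoff vs. temptation
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

def my_payoff(me: str, them: str) -> int:
    return PAYOFFS[(me, them)][0]

# Defecting ("ratting") beats cooperating no matter what the other player does...
for their_move in ("cooperate", "defect"):
    assert my_payoff("defect", their_move) > my_payoff("cooperate", their_move)

# ...yet the group's total utility is highest when neither defects.
totals = {moves: sum(pair) for moves, pair in PAYOFFS.items()}
assert max(totals, key=totals.get) == ("cooperate", "cooperate")
```

Both assertions pass: each player individually prefers to defect, yet mutual cooperation (total 6) beats mutual defection (total 2), which is exactly the tension the conversation describes.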
SPENCER: Great. It's a weird example because maybe we want prisoners to go to jail, so from a utility-maximizing view, it's an odd choice. But the point is that we can have systems where everyone is incentivized to take a certain action that actually harms the whole group, and it'd be much better if we could all agree not to take that action. A classic example might be: there's a river and we all drink from it. We really don't want other people polluting it, but if other people are polluting it, it's slightly more convenient for us to pollute it too. The main problem is that most of the pollution is not our pollution; it's everyone else's. So then everyone pollutes it.
LIV: Exactly. Most tragedy of the commons type scenarios are the result of what I like to call a Moloch trap. This is the Molochian process where, as you say, the short-term incentive is that it's easier for me to just pollute than to install an expensive and annoying filtration system. But if everyone ends up doing it, then the river ends up ruined, the classic tragedy of the commons. It's an interesting term because there are all these near-synonymous phrases: tragedy of the commons, coordination problems, social dilemmas, arms races. Those all fall under the category of Moloch. Moloch is a personification of this whole collection of game-theoretic forces, which is why I like it, because it gives us something to actually point at. These misaligned incentives are the driving force of so many of our biggest global problems. If you dig down into almost any major dilemma we've got in the world today, it's the same kind of process: "I don't really want to do this crappy thing, but if I don't, the other guy is going to do it anyway. So I might as well." Just competition gone wrong. I think it's useful to have a bad guy with a name and a face that we can imagine, like the end boss we've got to figure out how to defeat.
SPENCER: I like that. Because it's so hard if you just think of it as an abstract problem that arises naturally. Whereas if you can pinpoint it and say, "That's the enemy," then you can rally people to actually care about it. Part of this has to do with unhealthy competition; so what is healthy competition? Because it seems pretty clear that sometimes people competing is actually a good thing.
LIV: A classic example of healthy competition is the Olympics. Some people don't like it, but I think overall it's a pretty nice thing that every four years, there's an event that someone in every country on Earth watches, and it brings the world together through sport and competition. Everyone cheers on their own country, or even their favorite athletes from other countries. It's a shared activity that the whole world can enjoy and can speak the language of. Almost everyone on Earth speaks the language of sport in some regard; everyone on Earth speaks the language of competition, actually. Even though there's only a fixed number of medals, that's a clear example of a competition that has positive externalities. Another example: the Higgs boson discovery was technically the result of a number of competing teams at CERN, trying to be the first to prove the existence of the Higgs boson and then verifying each other's results. The scientific process involved a little bit of competition as well to drive things along. In business, innovation is often driven because there are clear rewards if you're the company that builds the best product to solve a problem. That's a way of using competition for good. I'd say, by and large, competition is a positive force that drives so much of the progress we see. But there are times when it all goes a bit awry and very bad things happen.
SPENCER: Well, the Olympics is such an interesting example because I would agree with you mostly, it seems like the Olympics is a really good thing. But you can see Moloch hidden in it with these doping scandals. You start getting these teams where they might actually have an incentive to cheat, because it could make them win gold medals. Even though probably everyone would be better off if we could all just agree that nobody's gonna cheat. But then how do you actually enforce that?
LIV: A very good example of it. Like Icarus, that documentary. I recommend everyone listening to this, if they haven't seen it, go watch it. It's one of the best documentaries of recent years; it uncovers the doping scandal going on in Russia. The biggest headache the Olympic organizers have to deal with is knowing that, at any given moment, because of the power and prestige of the Olympics, there are massively strong incentives for individual athletes, whole countries, or teams to essentially defect from the agreement that, "No, we don't do doping. This is a clean sport. There's a list of drugs. Do not take any of these drugs to enhance your performance." When you have that many different people competing, and a prize as prestigious as an Olympic medal, cheating is almost an inevitability.
SPENCER: What makes the difference, fundamentally, between the kind of competition that leads to Moloch situations and healthy competition? Clearly, we can slide between the two.
LIV: The main thing is that it comes down to the concept of alignment. If the incentives acting on the individual players are aligned with what's good for the whole, then the competition, by and large, is healthy. But if the reward given to the individual for taking action X is not aligned with the good of the whole, then that's going to create negative outcomes. That's the fundamental difference. Obviously, it can be hard, because how do you define something as good or bad? This is where we need moral philosophy. Are we gonna be measuring universal utility before the game versus after? It's obviously a bit speculative, but I think most people, by and large, have an intuition for when a competition creates a good outcome versus a bad one. The competition over who can create the most addictive social media platform, I think most people agree, is clearly not a good direction. That's not a good application of the competitive force of capitalism. But if the competition instead rewarded the social media companies that were best at educating on nuanced and complex topics in a way that encourages happy and collaborative social interactions online, if those were the metrics these social media companies were competing on and designing their algorithms around, that would be a healthy example of the capitalist process in action. It really comes down to this idea of alignment and misalignment.
SPENCER: If we go back to the Olympic example for a moment: in what sort of worlds would you find doping scandals, and in what sort of worlds would you not? You could imagine a world where the Olympic Committee is so good at detecting drugs in people's systems, where they have such a robust system, that it just doesn't make sense to try to dope, because the probability of getting caught is so high and the penalties are so large that it's irrational to do. Then you wouldn't have doping. Or you can imagine a world where doping is so expensive or so dangerous (maybe because in that world the chemicals are hard to come by, or really hazardous for whatever reason) that you also wouldn't have big scandals. It seems to come down to this subtle thing: can you make the cost of doing the bad thing large enough that people don't just endlessly engage in it? Does that seem right to you?
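The cost-versus-benefit logic Spencer is gesturing at can be sketched as a simple expected-value check. All the numbers below are hypothetical, chosen only to illustrate the two worlds he describes:

```python
# A would-be doper's expected-value check (all numbers hypothetical):
# cheating "pays" when the expected gain outweighs the expected penalty.
def cheating_pays(gain: float, p_caught: float, penalty: float) -> bool:
    return (1 - p_caught) * gain > p_caught * penalty

# Weak enforcement: 10% detection odds, mild penalty -> cheating is tempting.
assert cheating_pays(gain=100, p_caught=0.10, penalty=50)

# Spencer's robust world: near-certain detection, huge penalty -> it never pays.
assert not cheating_pays(gain=100, p_caught=0.95, penalty=1000)
```

The design lever is visible in the inequality: raising either the detection probability or the penalty (or lowering the gain) flips the rational choice away from cheating.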
LIV: Exactly. You need really high costs. You also need better information gathering, because one of the things Moloch thrives on is imperfect information available to the players. One of the reasons people are probably more incentivized to cheat in the Olympics, for example, is that they feel like they're already not running a fair race. If they think there's a chance their opponents might be cheating (even if you give each one only a 5% chance, when you're running against 30 people it's statistically very likely that at least one of them is cheating), it's easier for people to talk themselves into defecting themselves, because there's no clear information. Moloch thrives on uncertainty and on people's ability to be deceptive. Then, if the rewards are huge but the penalties are really low, that's another way it will manifest. The other thing it requires is a lack of oversight from a governing body. If there's some form of centralized figure that can create a way for everyone to essentially coordinate, then it's much less likely to happen. At least in the case of the Olympics, they do have a governing body, and individual countries have their own Olympic Committees and so on. But even then, it's still very hard because of the number of people involved. The more people you have involved, the harder it is to coordinate them.
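Liv's back-of-the-envelope claim checks out. Assuming each opponent cheats independently with probability 5%:

```python
# Chance that at least one of n independent opponents is cheating,
# when each cheats with probability p.
def p_at_least_one(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Liv's example: 30 opponents, each with a 5% chance of doping.
prob = p_at_least_one(0.05, 30)
print(f"{prob:.1%}")  # prints 78.5%
```

So even a small per-opponent suspicion compounds into roughly a four-in-five chance that someone in the field is cheating, which is exactly the uncertainty she says makes defection easier to rationalize.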
SPENCER: We've talked about: you need costs for doing the bad actions, you need good information, and a governing body can help. I guess I would also add pre-commitment mechanisms to the list: if somehow all the Olympic teams could prove to each other that they're not going to dope, that could solve the problem. Or in the prisoner's dilemma, if the two prisoners could prove to each other that they're not going to defect, then neither of them would.
LIV: Some kind of trust mechanism, which I would put under the same category: some ability to coordinate. I guess it doesn't have to be centralized.
SPENCER: Coordination could be centralized or decentralized, but you need the coordination.
LIV: Probably a bit of both. The most robust form would have both decentralized and centralized coordination, ideally.
SPENCER: Now, would you say that this is a source of a lot of the biggest problems in the world?
LIV: I would say, arguably, it's the source. I've heard it described as the generator function of a lot of different catastrophic and existential risks. Take climate change, for example. Part of the problem is that companies are competing against each other for market share, so they're incentivized to use the cheapest fuel sources, which are typically the dirtiest, or incentivized not to put in expensive filtration systems, and so on. There's that sort of misaligned competition going on at the corporate level. Similarly for countries: they're trying to grow their GDP, they don't want to get left behind by their neighbors, and so on, so they'll resort to, again, the cheapest fuel sources, which are typically the dirtiest. Amazon deforestation is the same thing: farmers are competing against each other for space. On the edges of the Amazon rainforest, it's usually cattle farmers. It's not like they want to cut down all of the rainforest, but it's like, "Well, if I don't do it, the guy down the road is going to do it. So I might as well anyway." They're under the same kind of competitive pressure to defect. Then there's the race to artificial general intelligence. As the competition heats up, as more and more players enter the game (before, it was a one-horse race, just Google DeepMind; then OpenAI came along, then Anthropic, and then Meta jumped in), more and more companies are starting up trying to race to AGI. That's turning up the competition dial, and it's incentivizing individual companies to pour more of their resources into progress and less into safety, because ultimately the rewards are so big that it's hard to optimize for both at the same time. That's another example. The breakdown of information and trust online is another really good one. The media landscape is more competitive than ever. It used to be just big old legacy companies, the big NBCs and so on.
Now, the internet has democratized the ability to make everyone a citizen journalist, which in some ways is amazing. But it turned up the incentive pressure to do whatever you can to get people's attention, which unfortunately tends to lead to headlines that are very inflammatory and cater to echo chambers. Clickbait, essentially. That's one of the main drivers of why we're seeing so many of these culture wars dominate our online landscape. Because again, everyone is incentivized to put out the most attention-grabbing, inflammatory thing they can, and that's usually something upsetting and anger-inducing. Again, same process. With almost every one of these big problems, you'd be like, "Ugh, if only we could coordinate." That's Moloch, and it's behind basically all of our biggest problems.
SPENCER: As you've been saying that I've just been thinking, what are the kinds of problems that are not Moloch related? It seems to me that Moloch tends to come up when you have reasonable people who are either altruistic or neutral. They create massive problems. Like you have a whole bunch of entrepreneurs that are either altruistic or just neutral, they're just trying to make money, and they end up causing calamity. Or you have a whole bunch of AI researchers or bio researchers who are acting reasonably and are somewhere between neutral and altruistic, but they end up causing calamity. Whereas there may be other types of bad things in the world, like where you just have a serial killer who just enjoys hurting people and they go around hurting people. Maybe that would be outside the scope of Moloch. What do you think?
LIV: I would say so. Moloch is less about intentional sadism and more about operating within the rules of the game, to an extent. You could argue that it also incentivizes cheating, but a lot of these problems driven by a Molochian process come from the crappy design of the game itself. It doesn't require any sadism or joy in crushing enemies. I've also extended the definition a little bit to cover the mindset that often kickstarts the Molochian process. It does often require a small subset of people to be so self-interested that they're willing to do the crappy thing in the first place. An example I gave in one of my videos, because it was something that affected me very personally, is these beauty filters that are now endemic to social media. If you polled almost every woman on Earth, most would say, "These don't make me feel better about myself." I noticed that when I would upload a photo I really liked to Instagram and then apply one of these filters, freely available on the app, to enhance my features, I would look back at the original picture and not like it anymore. This is not good for my mental health. But again, if you're playing the rat race of trying to become a social media influencer, the fact of the matter is that hotter pictures just get more likes, and those translate into more followers. So everyone is incentivized to use the latest beauty weapons (as I like to call them), these AI filters, to enhance their photos, to enhance their image. But if there had been a way to get everyone together in the first place, if everyone could have seen the logical end point of where this was going, I think the vast majority of people would have said, "I'm not going to do this. I can see this is gonna be bad, not only for my followers and their mental health but even for my own mental health. I'm not going to do it."
But once a few people start the ball rolling, it's very hard for anyone to stand up and resist.
[promo]
SPENCER: It seems to me that one of the key aspects of a Molochian system is that you don't need to have any malevolence or any bad intention to do the bad thing once you're in that system. Because the system itself pushes you to do the bad thing. Everyone could even be altruistic and you still might end up greatly harming the world.
LIV: In certain situations, yes, like ones where everyone is truly acting in good faith and with the best intentions. Take countries in the Cold War. They probably didn't want to pour more and more of their money into nuclear weapons, but they legitimately had to defend against a credible threat. If you're the US and there's a credible threat that Russia now has 100 nuclear weapons and you've only got 10, it's like, "Shit. We've got to build more, because otherwise we truly can't protect our citizens with the threat of mutually assured destruction." In that situation, your hands truly are tied. I think there are other situations where your hands aren't quite so tied. With the beauty filters thing, technically you can say, "No, I'm going to make a stand. I'm not going to use these filters." In fact, I noticed that I actually won followers in the end by making a point about this. So I think it depends a little bit; there are different grades of intensity to the trap. That's why, when I talk about Moloch, I don't only talk about the systemic problem but also about the psychological driver of it. Because if you have a Moloch trap, there are two ways of solving it. One, you can redesign the rules of the game so that the incentives don't create the bad outcome. Or two, you can get everyone within the game to wake up to the fact that their actions, if everyone takes them, will create the bad outcome, and to take a small personal hit, the short-term cost, for the long-term good of the whole. Making everyone simultaneously enlightened, basically, would be the other approach. I don't know which of those is...
When you're dealing with these really big-scale problems, like how do we overhaul the media industry so that it stops doing clickbait, I don't know which is more likely: making everyone enlightened enough to say they won't optimize for these short-term metrics, or just redesigning the whole system from the ground up. Both of them seem equally impossible, frankly. But that's where I'm at in terms of thinking about how we can start solving it. It's a two-pronged approach but sadly, no solution yet.
SPENCER: Do we know of examples where a Molochian system was resolved through the bottom-up approach, where lots and lots of individual people either refused to engage in it or refused to follow the incentives?
LIV: Arguably, any example where a large moral change happened seemingly universally, like the fact that slavery is now no longer acceptable in most parts of the world. In certain countries, there were top-down effects, the abolition of slavery and so on. But in many cultures, it seems like people just started realizing that this is not okay. If you think about it, people saying, "I'm not gonna keep slaves anymore," that was an economically difficult thing to do. They were taking a short-term personal loss for the long-term good of the whole. That could be one example of it.
SPENCER: Maybe especially outside the US. In the US, people were forced into it.
LIV: That's more an example of a top-down one. Another example might be animal welfare. Once upon a time, people didn't even think animals could feel pain. Over time, we have fortunately started realizing that animals do have moral worth. Social norms started to develop before any animal welfare laws were actually enforced.
SPENCER: Certainly. If you were engaging in dogfighting, a lot of people would judge you socially and put pressure on you to stop. So even though it is illegal, there are other bottom-up forces occurring there as well.
LIV: Arguably, as communities, we naturally punish selfish behavior. I think that's one of the reasons why people often say humans are naturally Molochian. They can be, but I would say that overall, humans are actually more naturally the opposite. If you think back to hunter-gatherer times or even early societies, if people acted too selfishly (taking more than their fair share of food and so on), that would have been bad for the tribe, and the tribe would have come together and kicked that person out or punished them very severely. Cooperative behavior was directly rewarded, and the incentives were aligned with the good of the whole. I think we have a strong sense of when someone is being unjust or selfish, and societies tend to have quite a good immune response when people behave in Molochian ways. So I think the bottom-up approach is probably one of our most promising approaches to solving the Moloch problem in general.
SPENCER: I suspect that historically, 100,000 years ago, when you'd have extremely selfish actors crop up who would exploit some community, there was this natural mechanism to eject them, essentially. This was part of the role of morality: "Hey, this person is violating our norms." Maybe the person can get away with it for a while, but eventually gossip spreads and people start connecting. Even if the person is powerful, if enough people in the tribe come together and say, "This person is bad," eventually they kick them out of the tribe, and that restores the equilibrium of altruism.
LIV: That's a good point. You mentioned gossip. I think it's a very powerful anti-Moloch tool. That's probably why it evolved in the first place. Obviously, it can be used for nefarious purposes as well. But by and large, gossip served and still does serve a purpose. If someone is behaving atrociously, it's in the interest of the rest of the tribe to know.
SPENCER: One thing I'm wondering about in this conversation (I don't know whether it fits in Moloch or not) is when a belief system that's very harmful comes about. For example, a religious fundamentalist belief system that says you should try to kill innocent civilians. Or it could be a political belief system that is extremely authoritarian and says that citizens shouldn't be able to get their homes. Would you put these harmful kinds of belief systems in your Moloch framework? Or would you say that that's a source of harm that's outside of it?
LIV: It's certainly true that ideas themselves are competing in meme space. Whether that competition between ideas is aligned with the good of humanity, I couldn't answer. I could see cases where certain ideas that are very good at memetic spread go viral and become weaponized. That would be a Molochian process. But it's hard to answer with any certainty.
SPENCER: It seems like systems where the optimization pressure is on spreading may have a tendency for these kinds of problems to arise. Because if the intent is to spread, then the things that win are just the things that are good at spreading, not the things that produce anything else of value. Maybe that's how ideas tie in. If we're living in a world where the ideas that spread are the ones that win, the ones optimized to infect other brains, then we shouldn't expect them to do very much other than spreading. They shouldn't necessarily promote human values.
LIV: Totally. It's somewhat orthogonal. Something's ability to go viral and spread is, at best, uncorrelated, and may even be anti-correlated, with what's actually good for people. I could see that it could definitely be a Molochian situation.
SPENCER: There's probably no better example of that than social media. Should we talk about that in particular? Because it seems so relevant here. Where do you see this coming up in social media the most? Then, do you see any plausible ways to break out of a bad equilibrium in social media?
LIV: It's operating on lots of different levels. It's obviously operating on the individual users, who are trying to get famous, grow their followers, and make money through adverts on their content. They're incentivized to do whatever they can to go viral, which may or may not be correlated with the actual quality of the information or whether that information is actually good for people. There are clear incentives: if you can somehow create content that is very addictive and makes people want to keep watching, but is actually making them stupid, or feeding them bad information, or making them angry, that's a very unpleasant outcome. Then, between the platforms themselves, they're all competing for market share. The user base has only a finite amount of eyeball time it can spend on phones (although, to be fair, that number is probably growing every single day on average). But still, these companies are all incentivized to use increasingly aggressive tactics to make their platform the most compelling to users. Compelling doesn't necessarily mean healthy. It can be, but by and large, if your profit model is directly correlated with getting someone to be on their phone all day long, it's almost certainly not actually good for the human being on the other end of that phone. It's just Moloch all the way down when it comes to social media. The question is: what can these companies do to make their products more aligned with general human health without getting punished for it in terms of their bottom line? Part of the problem is that most modern business structures maximize shareholder value as the number one thing. That's a big problem because it's such a narrow metric. It doesn't factor all the other externalities of creating a highly addicted populace (for example) into its P&L.
SPENCER: Clearly, social media is really relevant here; it's a place where Moloch plays out in a very big way. Tell us about where you see social media and Moloch connecting.
LIV: It's down the whole stack, frankly. You've got it playing out at the individual level where, if people are trying to grow their followers or just go viral, they're incentivized to do whatever they can to produce the most attention-grabbing content, which is often orthogonal to things like truth or nuance. In many cases, it's completely anti-correlated. You've also got it playing out between the companies, because these companies are all competing for eyeballs and market share. They're incentivized to do whatever they can to make their platform the most compelling, because their business model is directly tied not to user happiness but to user engagement: how much time is spent, how many ads they can serve up, and so on. Frankly, until the fundamental nature of how they play their game changes, until the metric they're optimizing for is broadened or redesigned so that it reflects user health and happiness as opposed to just user retention and time spent, I think it's going to continue being a Molochian situation.
SPENCER: I've seen people propose solutions to this where they're like, "What we need is a new social media site that isn't optimizing for time on site. It's optimizing for some other metric." But then my immediate question is, isn't it just gonna get outcompeted? Won't people spend all their time on the other sites, the ones that are optimizing for time on site?
LIV: This is part of the problem. Then you have to go a level deeper and be like, "We need to redesign the whole game of media in general. Not just one particular social media platform; all the social media platforms." But then they're competing against the legacy media as well. You've got to make them all stop optimizing for the wrong thing. That's the nature of this Moloch problem: the more you dig down, the more you realize just how deep it goes. I'm not dead certain on this, but it feels like the real root cause is that... I don't mean to sound in any way anti-capitalist, because in many ways I love capitalism. I think it's done far better than any other ism, any other economic system, at creating prosperity on Earth. But it's got a fatal flaw, which is that it tries to distill everything down into a singular metric, whether that's profit and loss or maximizing money. There is information lost in that compression process. By distilling everything that happens in the competition between companies down into this singular prize, you end up with all these negative externalities. We really need to find a way to internalize these externalities into the competitive calculus going on between companies, so that the actual environmental costs, whether that's the physical environment or the mental environment of people and the informational landscape, are properly incorporated into the metrics these companies are optimizing for, and the environment stops being polluted.
SPENCER: One thing I wonder about is, obviously, companies have to optimize, at least to some extent, for things like time on site, right? They do need people using their products. But could they push some of that optimization power toward people feeling good in retrospect, feeling like their time was well spent? Imagine these two metrics: how much dopamine or positive feeling you have in the moment when you're doing the thing, and the retroactive evaluation of, "I'm glad that I spent that time on that site." Could they push more of the optimization pressure toward the second thing than they currently do?
LIV: It's actually a very good point. Think back to when I've gone on an Instagram scrolling session: in the moment, if you could ask my brain, "Are you having a good time?" the answer is probably yes, because clearly my finger wants to swipe up and see the next video and the next video and the next video. But if you take me away from that, and I get the chance to look back at it, it's "No, I'm not happy about those two hours of my precious life I just lost watching cat videos." So I don't know how that would look. But if there were a way to incorporate that feedback into the corporate review process, and in some way affect their bottom line, that would be a way of incentivizing them to change their offering, essentially.
SPENCER: Well, you could imagine that it could create stickier, more loyal long-term customers if they actually feel good about their usage of the product, rather than kicking themselves about how much they use it. Not to say that you would push all the optimization pressure toward the long-term feeling, but at least it might not be bad on the margin to push more in that direction. You could imagine things like, maybe periodically, at the end of a session, you ask people, "How good did you feel about the last three minutes?" You don't have to do that all the time; even 1 in 1,000 users could give them another metric to track.
LIV: The first thing that came to mind was: but again, the platforms that do this are just gonna get outcompeted by the shitty ones that don't bother asking their users. But I actually think there's enough momentum now. There's enough self-awareness among the average person that spending so much time on social media isn't good for them that they might start to respect the companies that are actually willing to take the high-risk approach of asking, "Did you truly have a good time here?" My hope is that people will become sufficiently conscious of the problem that they'll start rewarding the companies that take this opposite-of-Moloch approach and actually show legitimate care for their users' health. That would be nice.
SPENCER: We've seen Apple do this a little bit with things like standing up for user privacy. To be fair, they're in an interesting position where standing up for user privacy doesn't hurt their business the way it hurts their competitors'. So maybe it's easy for them to do. But I do think it helps them build goodwill when they say, "We're gonna take a stand on this thing that users care about, and we're gonna help protect them from abuse." Well, we've talked a little bit today about how to solve these problems, and there's a sense of hopelessness in your voice. I'm curious: if you think about a toolkit for solving these really big Molochian problems, what should we be thinking about? What's the hopeful view?
LIV: A hopeful view, for me at least, comes from the fact that we have successfully gotten ourselves out of Moloch traps in the past. The Montreal Protocol is a good example of that. In the late '70s and '80s, people realized that the ozone layer was starting to deplete, and that it was coming mostly from CFC emissions. In theory, it should have been one of these coordination problems that's too hard to get out of, because CFCs are really cheap and easy and they make really effective products. Yet this was solved top down: rules and regulations were put in place banning them, and those rules and regulations held. That is, by and large, a problem of the past. Perhaps an even bigger and better example is the fact that 30 years ago, there were about 60,000 live nuclear weapons on Earth. Enough to kill everybody on Earth, I don't know how many times, probably more than 10 times over. Now we've managed to reduce that number. It's not like the problem has gone away, far from it, but the number is now somewhere around 13,000. That's something you would not have expected to happen; usually arms races only go in one direction. Through centralized planning, a top-down process, we developed a bunch of treaties, and the treaties, by and large, held. Things are unfortunately starting to get a little bit defect-y again at the moment, but that's an example of defeating Moloch. Another one is the Antarctic Treaty. Antarctica is, in theory, a big old continent with probably a bunch of resources. It's very cold. But countries came together and all agreed, "We're only going to use this for peaceful scientific purposes," and that has held since 1959. So there is evidence that both top-down and bottom-up approaches can get us out of Moloch problems. I think in reality, you need both for a lot of these.
You need to basically educate the people who are stuck in the trap, make them realize it, give them just the right amount of guilt and responsibility over their own actions without making them feel completely frozen, and also create some kind of coordination mechanism and rule enforcement, essentially, so they know that their selfless actions are going to be rewarded and that others who act selfishly get punished. We know what the core ingredients are, and there's a lot of willingness to try. That's where my hope comes from. But beyond that, it's difficult. In all honesty, I oscillate. Some days I'm really optimistic and I think we're gonna be fine. Other days, I'm really pessimistic. I think, partly, it's a function of just how much sleep I got the night before. Unfortunately, you're catching me on a day where I haven't had much sleep. The other thing as well is that there are lots of people thinking about this now, in part thanks to Scott Alexander, who wrote this amazing blog post called Meditations on Moloch, which, if anyone listening hasn't read it, you must go read. It inspired me so much to start making videos and talking about this problem. The fact that so many people now get the nature of the issue gives me a lot of hope, because there are just so many brilliant minds out there actively thinking, "How do we build better coordination mechanisms?" There are projects within the crypto space. People within the AI community are very aware of it. I get a lot of emails from people I've never met saying, "I've got an idea of how to solve Moloch." A lot of these ideas aren't necessarily viable, but the point is, people are thinking about it, and that gives me a lot of hope. Because in reality, the Moloch problem is a problem of the collective, and I think the way out is a solution of the collective. I'm just confident the hive mind is going to figure it out.
SPENCER: If we think about what's in the toolkit for solving this problem... Let's say you have this strange system with weird incentives, such that players in the system, acting in line with those incentives, cause lots of problems in the world. On the one hand, you can increase the cost of doing the bad things. A regulator could come in and say, "If you do this bad thing, we'll punish you, send you to jail, or fine you." On the flip side, you could increase the incentive to do the good thing instead. You could have philanthropists come in and say, "We're gonna give you rewards if you do the good thing." Philanthropic prizes. You can also take a bottom-up approach, like we talked about before: trying to shift culture, change the views of individuals, get people to care more, get people to understand the implications of their behaviors. Those are three approaches, but what else can be done in these kinds of circumstances?
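The regulator-and-prize levers Spencer describes can be pictured as edits to a payoff matrix. A minimal sketch, with all payoff numbers invented purely for illustration: in a prisoner's-dilemma-style game where defection (say, addictive design) dominates, either fining defectors or rewarding cooperators flips the stable equilibrium.

```python
# Toy symmetric 2x2 game. Payoff values are made-up placeholders.
ACTIONS = ["cooperate", "defect"]

def best_response(payoffs, opponent_action):
    """Action with the higher payoff against the opponent's action."""
    return max(ACTIONS, key=lambda a: payoffs[(a, opponent_action)])

def symmetric_equilibria(payoffs):
    """Actions that are best responses to themselves (symmetric Nash)."""
    return [a for a in ACTIONS if best_response(payoffs, a) == a]

# Base game: defecting strictly dominates (classic Moloch trap).
base = {
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 4,    ("defect", "defect"): 1,
}
print(symmetric_equilibria(base))    # ['defect']

# Top-down lever: a regulator fines defection by 2 units.
fined = {k: v - 2 if k[0] == "defect" else v for k, v in base.items()}
print(symmetric_equilibria(fined))   # ['cooperate']

# Alternative lever: a philanthropist pays cooperators a prize of 2 units.
prized = {k: v + 2 if k[0] == "cooperate" else v for k, v in base.items()}
print(symmetric_equilibria(prized))  # ['cooperate']
```

Either change makes cooperating a best response to itself, which is the shared structure behind "increase the cost of the bad thing" and "increase the reward for the good thing."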
LIV: One thing that's really exciting to me is this idea of zero-knowledge protocols: basically, trustless verification. It's a product of the crypto world. I'm not an expert on this by any means, but people are designing protocols where you don't need to be able to trust the other person. Smart contracts execute without you needing to speak to the other person, know who they are, or trust them. Those solutions seem like a way to go. I don't know if that counts as changing the rules of the game, but it seems extremely promising.
SPENCER: You might put that into a broader class of technological solutions that allow you to do things you normally can't do. Kickstarter is another cool example of that. In theory, you might want some thing to exist in the world, but you don't want to put money into it unless everyone else is gonna put money in too, because otherwise your money is gonna be wasted. Kickstarter says, "If everyone puts money in and we reach some threshold, enough money to create it, then it will get created." Of course, in practice, it doesn't always work; some Kickstarter projects are late or never ship at all. But at least in principle, it does help solve this collective action problem. On zero-knowledge proofs in particular, it is interesting to think about what sorts of Molochian problems they can solve. They seem helpful in situations where one party, sometimes called the prover, can prove to another party, the verifier, that a given statement is true without conveying any information beyond the mere fact that it's true. They're saying, "I can prove to you this is true, but I don't have to tell you anything else about it." I don't know too much about the details, but there was a proposed way to use techniques from zero-knowledge proofs related to nuclear disarmament. The idea is that you could show that something is indeed a nuclear weapon without revealing anything about its inner workings that might be secret. That could be a cool application.
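The prover/verifier idea Spencer describes can be made concrete with a classic toy protocol. This is a minimal sketch of Schnorr's zero-knowledge identification scheme, with deliberately tiny, insecure parameters chosen just so the arithmetic runs; real systems use large groups and audited cryptographic libraries.

```python
# Toy interactive zero-knowledge proof (Schnorr identification).
# The prover convinces the verifier she knows the secret x behind
# y = g^x mod p, without revealing x. Parameters are tiny and NOT secure.
import random

p, q, g = 2039, 1019, 4  # p = 2q + 1 (both prime); g generates the order-q subgroup

def prover_commit():
    r = random.randrange(1, q)      # ephemeral secret nonce
    return r, pow(g, r, p)          # commitment t = g^r mod p

def prover_respond(r, x, c):
    return (r + c * x) % q          # response s = r + c*x mod q

def verify(y, t, c, s):
    # Accept iff g^s == t * y^c (mod p), which holds exactly when s = r + c*x.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

x = random.randrange(1, q)          # prover's long-term secret
y = pow(g, x, p)                    # public key, published in advance

# One round of the protocol: commit, challenge, respond, check.
r, t = prover_commit()
c = random.randrange(0, q)          # verifier's random challenge
s = prover_respond(r, x, c)
print(verify(y, t, c, s))           # True: proof accepted
```

The verifier ends up convinced the prover knows x, but the transcript (t, c, s) leaks nothing about x itself, since for a random nonce r the response s is uniformly distributed. The nuclear-inspection proposal Spencer mentions uses far more elaborate machinery, but the same prove-without-revealing structure.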
LIV: That's exactly the kind of thing. Just imagine, it'd be such a game changer. Part of the reason mutually assured destruction holds is that there is some uncertainty about how many nuclear weapons your opponent has. I wonder... now that we're moving into an era of very cheap surveillance, everyone will have close to perfect information about their opponents. Would that actually make the situation more or less volatile? It's interesting to think about. That said, using zero-knowledge proofs for disarmament actually makes a lot of sense. It'd be super cool if that could happen.
SPENCER: Some people talk about this idea of Game B. As I understand it, Game A is the game we're currently running, which creates an unsustainable world that will eventually cause calamity, and Game B moves to a different set of rules. I'm just curious, do you see that as connected to Moloch? And what's your view on this Game B idea?
LIV: I'm not by any means an expert on that. In principle, it sounds very cool. If I can try to distill it down: Game A is built fundamentally upon rivalrous dynamics, so it's competitive, whereas in Game B, I think there is still space for competitive dynamics, but the primary mode of interaction is coordination and cooperation. From what I understand, it's been proposed as a solution to Moloch. When I've heard people speak about it, they tend to talk about coordination technologies, using these collective-intelligence-type solutions to problem solving. But beyond that, I couldn't really speak to it.
SPENCER: I know that now you're working on this idea that you call Win-Win. Tell us about that. What is that?
LIV: It's very much a work in progress. I've been spending so much time ruminating on, and to an extent almost embodying, this Moloch character for my videos, dressing up as it, inhabiting its headspace. It got me thinking, "If Moloch is the god of lose-lose games and unhealthy competition, what's the inverse? What's the god of healthy competition and win-win solutions and situations?" I couldn't get beyond the name Win-Win, so I was like, "Let's invent a positive god called Win-Win that can represent the inverse of Moloch." Two things I want to say about Win-Win. First of all, it's not strictly the inverse of Moloch, because that would put it on the same scale, the same dimensions, as Moloch. It's something much better and greater. It supersedes Moloch in terms of its ethos and its vibe. Moloch's personality is: you must win this game right in front of you. It's very mono-focused, short-term, pathological about winning. Win-Win is very carefree and fun-loving, just a good time. It's not all kumbaya, like, "We must all coordinate all the time." Win-Win is mischievous. It knows when to coordinate, and a lot of the time it will coordinate and do pure cooperation. But when there's space, because it loves to play, it's like, "Let's do a quick competition. Let's have a little zero-sum fun." So it loves competition as well. But crucially, it has the wisdom to know when to use competition and when to use pure cooperation and coordination. That's why it's on a level above Moloch. Because I don't have the intellect or the knowledge to come up with proper systemic, mathematically rigorous solutions, what I can do is paint a vibe of what we need more of, as an aesthetic and a mindset. That's the Win-Win ethos, and I'm just trying to personally embody it as much as possible. Part of the reason I was so attracted to this topic is that I was a very competitive person when I was younger. Pathologically so.
Maybe that's why I can embody Moloch so well in these videos: because I've been there. I must win this particular thing, caring about winning that particular trophy or getting the top mark in that exam, in a way that made me miserable when I didn't win. That's where my thinking is going next. That's my approach to finding a solution to this: trying to create a character that represents all the virtues that would be necessary for a win-win world that is free of Moloch. The thing is, Win-Win is so powerful it doesn't even mind Moloch existing sometimes. It's so powerful it can slap Moloch down when it starts getting a little too big for its boots. That's my hand-wavy solution to the Moloch problem.
SPENCER: The way you described Win-Win, for some reason, made me think of Burning Man.
LIV: Totally. Burning Man is like Win-Win; it's like when Win-Win goes to a party. Burning Man is obviously built on gifting, an abundance of playfulness, lots of games, and people know when to compete at Burning Man. Then if a game starts getting a bit too big, it's "Whoa, whoa, that's enough." Whatever the god of Burning Man is, it's definitely Win-Win.
[promo]
SPENCER: You mentioned your own personal history of competition. You were a very serious poker player. Do you think that that influenced how you think about this now?
LIV: I'm sure it played a role. There's no more zero-sum game than a poker game. Everyone sits down with whatever money they have, and your win always comes from someone else's loss. It made me acutely aware of this concept of, "What is the sign of the game I'm playing? Is it positive, zero, or negative?" It also made me acutely aware that even when you're playing a zero-sum game by definition, there are always externalities. In reality, there's no such thing as a truly zero-sum game; there's always some kind of external effect, whether positive or negative. The question is, how do you evaluate whether it's positive or negative? After doing that for 12 years, it's really hard to stop seeing the world through that lens. I've seen people have the highest highs and the lowest lows from that game. I often oscillate on: is the world better off for poker existing or not? I think so, though there are definitely some cases where learning to play was the worst thing that could ever happen to someone. So it definitely influenced my thinking. I wouldn't change my career path, that's for sure. I'm very grateful for it. But interestingly, I don't have that much desire to play anymore, I think partly because I now want to focus on how I can help with this bigger game that we're all in. There's another really cool book that people might enjoy that relates to this topic. It's called Finite and Infinite Games, a very short book by James Carse. It maps onto this idea of Game A and Game B, and also onto Win-Win and Moloch. A finite game is about winning the narrow game in front of you: who your competitors are, and how you defeat them. In the infinite game, the only objective is to keep the game going. You win by keeping the game going; that's the only winning condition. I think that's the kind of infinite mindset that we need more people to adopt.
Who knows? If a critical mass of people truly internalize and adopt that, then we might be able to just emerge our way out of the Moloch problem. I wish we didn't have to hinge all our hopes on it, but I think that's another promising direction.
SPENCER: Well, I think it's really powerful that you're getting more people to think about this in terms of Moloch. Because I agree with you, it does seem central to many of the world's big challenges. We certainly need better solutions; the human track record doesn't seem that great for these kinds of problems.
LIV: Well, we're still here.
SPENCER: Before we wrap up, I just want to do a mini bonus topic, which is completely unrelated. But what is the rationality to woo spectrum?
LIV: It's my little name for the different modes of thinking that I've found I can fall into. You've got rationality at one end of the spectrum, and this is, to be fair, the conventional definition of rationality, which maps onto the idea of System 2. It's very linear thinking: "There's an objective reality; how do I make my mental map match that objective reality as accurately as possible so that I can achieve my goals?" That's one end of the spectrum. Perhaps a better way of thinking of it is the objectivist, materialist, physicalist mode of thinking. Then you can go all the way to the other end, which is: everything is subjective, and there are sources of information that you can't access through conventional means, whether it's spirits, gods, demons, all of the woo stuff. I think most people have some definition of woo that makes sense for them. What I like about the idea of a spectrum is, if you'd asked me this question eight years ago, I was so far on the left side of it (the classic rationality side) that I had no open-mindedness toward any of these other things. Whereas nowadays, I find value in sometimes putting on the hat of, "Maybe there isn't an objective reality. Maybe there is energy healing. Maybe there is a God. Maybe there are gods, even." I like that it's a mode I feel safe to play around with, and that it doesn't make me less of a good rationalist when I choose to do it. That's probably the best way to describe it, if that makes any sense.
SPENCER: Well, I'm sure there are people who have been accused of being too far on the left of that spectrum. I'm curious, when you decide to play in the woo end of the spectrum, do you think of it as suspending disbelief? Like being in a movie: "I know the movie is not real, but while I'm watching it, I want to act as though it's real because it makes for a better experience." Is it like that, or is it a different mindset?
LIV: It depends a little bit on where I am, the setting, whether any substances are involved. But no. There have been occasions where I felt in my core that the frame I'm in is not just me putting a hat on and playing woo. Energy healing, for example. I had a very mind-opening experience, for want of a better word. I had a degenerative hearing loss problem that I was told was gonna make me go deaf. Then, for want of a better word, someone exorcised some kind of negative energy out of me, and it fixed my problem entirely. From that moment, I was like... You could say it was just pure placebo. Placebo is powerful, and we don't understand it. My answer to that would be, "You can call it whatever word you want, but the point is that some kind of energy transfer, information transfer, happened that is outside the realm of any kind of conventional science." Science right now does not have an explanation for what happened aside from slapping the word placebo on it. That opened my mind to the idea that maybe there was some external form of energy that got sucked out of me that didn't belong there in the first place, or that my body's energy was screwed up in some way. I use the term energy in a slightly metaphysical sense, not as in conventional electrons moving around or whatever. When I look back on that moment, it was real. What do I mean by real? In that moment, what was happening to me was so groundbreakingly salient and indisputable. It's not like I was playing along; this thing was happening, and I was along for the ride whether I liked it or not. And I still live with the positive, real physical effects, whatever state of mind I'm in. At this point, it depends. Sometimes I dip in without fully getting immersed or losing myself in it.
But there are times when I do lose myself in it, and as far as I'm concerned, it feels in that moment as real as this rational mode I'm in right now in this conversation. Hard to answer, but I think that hopefully gets the point across.
SPENCER: A skeptic listening might say, "Well, how do you know, absolutely, for sure that it wasn't just a coincidence that your hearing improved, right around that time?" What would you say to that?
LIV: That's entirely possible. That's definitely possible. I wish I could repeat the experiment many times and run proper statistical analysis on it; I'd love to. We can try to figure out the probability that it was a coincidence. Like, what timeframe would I have accepted between the energy healing happening and my problem going away? Had it happened within a year, would I still have attributed it to that event? Probably not. Had it been a month after the event? Maybe not. But the way it went down was: there's this girl who did this energy thing on me. It was quite shocking. I said to her, "Am I going to be cured, at least?" because I was quite scared after what happened. She was like, "You'll have the physical symptoms for another couple of weeks, and then you'll be fine." That's literally what happened. I can't remember exactly how many days, but somewhere between 10 and 16 days later, I had the last of the hearing loss, and my hearing has been great ever since. So it's possible it was a coincidence. But this thing we're dealing with has such low base rates; it's like, which explanation do you want to choose? In this instance, I've now had just enough of these little data points that I'm choosing to accept that there is this realm of possibility to play with. Honestly, my life has been much richer since I opened my mind to these possibilities. It's also given me a space for optimism, because I had what felt like a miracle happen. I went from a world where I thought miracles were ridiculous fairy tales to one where maybe sometimes they're true, and I'd much rather live in that world. Now you could say, "But that's not being rational, right? That's just wishful thinking." But there is a degree of truth to it that I think even the most diehard physicalist, materialist, rationalist would concede. The power of positive thinking, for example: most people agree there is some value in positive thinking.
There have been empirical studies (I can't think of one to quote) where the people who do the positive thinking are more likely to have success than the ones who don't. The line blurs about whether something is actually fictional or not, because our thoughts create actions, and actions create reality, to an extent. So it's a very fuzzy line to walk. I'm consciously choosing to play with delusion, I guess, to the point where the delusion may actually become real.
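The "probability that it was a coincidence" question Liv raises can be given a rough Bayesian shape. This is a minimal sketch in which every probability is an invented placeholder; the point is the structure of the update, not the numbers.

```python
# Hypothetical base-rate calculation: P(healing worked | problem resolved
# within the predicted window), comparing "healing" vs "chance remission".
# All probabilities below are made-up placeholders for illustration.
def posterior(prior_healing, p_resolve_if_healing, p_resolve_if_chance):
    """P(healing | resolved) via Bayes' rule over the two hypotheses."""
    p_resolved = (prior_healing * p_resolve_if_healing
                  + (1 - prior_healing) * p_resolve_if_chance)
    return prior_healing * p_resolve_if_healing / p_resolved

# A skeptic's prior: healing almost certainly doesn't work (0.1%), while
# spontaneous remission in any given fortnight is rare but possible (1%).
print(posterior(0.001, 0.9, 0.01))
```

With these placeholder numbers, even a well-timed recovery moves the skeptic's 0.1% prior only to roughly 8%, which is one way to frame both the skeptic's "probably coincidence" and Liv's point that repeated data points would shift things further.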
SPENCER: This makes me think: suppose it were true that some kind of exorcism could heal people through non-psychological means, through some spiritual means. I would certainly want to believe that was true if it is true. But if it's not true, if it either doesn't work or only works through psychological means, I would want to know that. I'd want to know whether it doesn't work, or whether it works through psychological mechanisms that could have been produced by any kind of placebo. I'm wondering, would you say the same? Suppose it worked, but through the placebo effect, or suppose it didn't work at all. Would you want to know the truth in this case? Or would you say no, there's some value in not knowing?
LIV: That's a really good question. It depends where I am on the spectrum. The rationalist in me would say, "I'd want to know the real truth." The woo me would go, "No, I want to keep this useful fiction, because useful fictions make my life better." I think it's a very personal thing; some things feel miraculous to some people. Perhaps a better way to put it is: being open to an explanation of something that would completely surprise you. Because I think when you're in deep rationality mode, it's often hard to be surprised.
SPENCER: Or if something seems too unusual, you just reject it?
LIV: Exactly. You just go "I just don't trust my perception on it."
SPENCER: Right.
LIV: The woo side of the spectrum is where the most groundbreaking discoveries are likely to lie. Like Einstein dreaming up general relativity: he probably had to delve into the mental processes that take you into the woo zone. He was probably more in that zone to be able to get that kind of inspiration. When he came up with this idea that time is relative, that there is no universal time, that didn't fit well onto the objectivist, physicalist picture of reality, certainly not in 1920. I would wager that the next real fundamental breakthroughs, if we're going to make them, are going to come from that kind of mindset. Honestly, it's where a lot of my own inspiration has come from. The Moloch and Win-Win stuff, which a lot of people seem to resonate with, came from that kind of headspace, whether it was a psychedelic journey or otherwise. A lot of these out-of-the-box ideas that, for me, point in some direction of truth have come from that space. I can't imagine that Einstein, when he dreamt up general relativity, was deep down in rationality mode; he wasn't on that end of the spectrum when he was thinking about that stuff, because you have to depart from it to have these real breakthrough-type insights that upend how we understand reality. And it seems like he was right. It's very hard to do that when you're in a rigid, objectivist, physicalist mindspace. I wouldn't be surprised if a lot of the low-hanging-fruit discoveries that are just outside our reach right now, conceptually, whether in physics or philosophy, are going to be more accessible through this very open-to-alternative-explanations type of mindset. That's not to say that everything that comes from there is accurate. In fact, probably the majority of it is nonsense. But occasionally, there is a signal in that noise.
The point is, it's a broader category of information out there that you have to go and sift through and play around with, which you just can't really access when you're in this more rigid mindspace. I think that's where the value comes from.
SPENCER: This reminds me of a spectrum that we've actually done some research on, with skepticism as one axis and seekingness as another. Skepticism is about resisting bad ideas, shooting ideas down, and finding their flaws. Seekingness is about looking for ideas outside of what you'd normally consider, trying to find great ideas you've never heard before, giving ideas a chance, seeing them at their best. Fascinatingly, when we asked people about their tendency toward these two things, we thought they would be negatively correlated. In fact, they had no correlation; they were independent. That was pretty interesting to find.
LIV: That sounds like it maps very closely onto what I call rationality-to-woo. Actually, it's a better description of it: open-minded seekingness on one end, and skepticism bordering on cynicism on the other. In reality, you need a healthy dose of both. I don't think it's necessarily a problem if some people spend more of their time on one end versus the other; it would be a very boring situation if everyone was perfectly in the middle all the time. It's nice to have a range. But for me, I needed to move more to the right, if we're going to put rationality on the left and woo on the right. Who knows, maybe now I need to move a little bit back the other way. But I'm glad to have realized that my issue was I didn't know there was even a spectrum at all, and that it was possible to move along it, and to do so consciously.
SPENCER: Liv, thanks so much for coming on. It's been a fun conversation.
LIV: Thank you. That was great.
[outro]
JOSH: A listener asks, "The internet is making the world feel smaller and smaller. Yet, most of us seem to feel lonelier and lonelier every year. Speaking for myself, I don't know any of my neighbors that live directly around me. How can we make stronger intercultural bonds individually and as a society?"
SPENCER: I wonder, if you were to actually chart loneliness by surveying a ton of people every year, whether it is still going up or not. I don't know the answer to that. One thing that is undeniably true is that a bunch of community systems that used to be in place for people are not in place today. For example, a lot of people who used to have a church at the center of their community life don't have that anymore. Many people feel like they don't know their neighbors, that there's no local group they're really a part of and feel deeply connected to. I think that is a real problem. But I think it's different from loneliness per se, because while it can contribute to loneliness, loneliness can also be removed through interpersonal connections alone: strong one-on-one bonds with people we spend a lot of time with can make us feel less alone, or not lonely at all. So there are a couple of options here. One is strengthening communities; the other is getting better at creating strong one-on-one bonds and spending a lot of time with people in our lives if we're feeling lonely. I tend to do the latter: I don't even worry about having strong communities and just focus on finding people I really like and building relationships with them. I find that this works well, but it also takes a bunch of time. You really have to make it a part of your life to seek out people you really like. If you meet someone you think you'd really like, you have to be active about reaching out to them to spend time together, because unless they're super eager right away, it might just fizzle out. So if you like someone, reach out and ask them to hang out. Then, if that goes well, it could eventually build into a repeated relationship. That's how I think about my own life.
But certainly there are many people who feel a lack of community, and I just think we don't have good replacements for a lot of the community structures that used to exist, and I don't know where those replacements will come from.