July 13, 2023
What is the New Enlightenment? What might it mean to improve our epistemics with regard to institutions? How should we fix imbalanced salience in contexts where misinformation is a problem (like news media)? How have the economics of institutions deteriorated? How can we continually reinvigorate systems so that they remain ungameable and resistant to runaway feedback loops? In the context of government in particular, how can we move away from "one dollar, one vote" and back towards "one person, one vote"? At what levels or layers should institutional interventions be applied? What can we do to increase trust across social differences and reduce contempt among groups? Under what conditions is it rational to feel contempt for an out-group? How can we make conflict and "dunking" less appealing, and make open-mindedness and careful consideration more appealing? What is the "dismissal" economy? How can we deal with information overload? How might the adversarial economic model be used to improve academia?
Ashley Hodgson is an Associate Professor of Economics and a YouTuber. She teaches behavioral economics, digital industries, health care economics, and blockchain economics. Her YouTube channel, The New Enlightenment, explores topics related to economics, governance, and epistemics — that is, the determination of truth and validity — in a world of social media and increasing power concentration. She also has another YouTube channel with her economics lectures.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Ashley Hodgson about the new enlightenment, culture wars and the dismissal economy.
SPENCER: Ashley, welcome.
ASHLEY: Thank you for having me.
SPENCER: I happened to see two different people post about your work in the same week. And I was like, "Who is this person? This is really interesting." And I started checking out your videos. In particular, I was intrigued by your work on institutions and how institutions are letting us down and what we really should hope for from institutions. So let's start there in the conversation, because I think this is really on people's minds a lot. I think there's been a lot of loss of faith in the institutions of society. A lot of people felt disappointed about the way COVID was handled. People have felt disappointed about either the party on the Left or the party on the Right, so let's begin there. Can you tell us what is the new enlightenment?
ASHLEY: Yeah, this is an analogy that I'm playing around with when it comes to institutions. Because I think when you first start to doubt institutions, it can be a little bit intimidating and not very hopeful, and I think we live in a time when people need hope. I think framing the investigation of institutions as the new enlightenment, where we're really revitalizing institutions rather than just focusing on the failure and things falling apart, I think, is really important. But also, I think one of the cool things looking back at the Enlightenment is that there was revitalization in what I see as three different realms, which are the economic realm, the governance realm, and then the epistemic realm. Epistemics is just how we understand what's true and what's valid; in the old Enlightenment, that was the scientific revolution.
SPENCER: That was in the 1600s, is that right?
ASHLEY: It was in the 1600s, a little bit 1700s. Some people will stretch it and bring it back before that. I cluster the scientific revolution in with the way I'm thinking, not necessarily because everyone thinks of the scientific revolution as part of the Enlightenment, but more because I'm just trying to use this as an analogy.
SPENCER: Got it. I think you're saying that in the Enlightenment that happened way back when, we had this evolution of epistemic, economic, and governance systems and norms. And you're saying that's what we need now. We need evolution in those three things. Is that right?
ASHLEY: Yeah, exactly. We need some updated ways of thinking about those, some ways of, I think, building in new principles, like they built in new principles back then about human rights and checks and balances on power, and rationality, and all of this, the scientific method, and we need a few things that are akin to those in the modern age.
SPENCER: Let's go through them one by one. Let's start with epistemics. First of all, not everyone knows that word. You want to define what that word is? And then what does it mean to improve our epistemics with regard to institutions?
ASHLEY: Well, epistemics is how we know what is true, how we know what is valid, how we discern legitimacy. And, of course, a lot of the epistemics of academia and journalism and these institutions that are designed mostly for generating knowledge and truth, I think that has to be the first realm where we improve them. And how can we improve them? I think people are falling back onto the old Enlightenment, the scientific way of thinking. And a lot of times, the conversations are about, what are facts that are scientifically validated? Does this study hold up to the scientific method? Whereas, the real point of contention is more about salience. I view the salience revolution — how we weigh different pieces of evidence — as where this next epistemic revolution is going to be focused.
SPENCER: I'm not sure what you're getting at there. When you're saying it's not really about what's true, and not really about the facts, but with the salience, what do you mean by that?
ASHLEY: Well, the facts have to be true, and they have to be scientifically validated. But salience is basically the importance weights when you're trying to apply knowledge to a real problem. The scientific method is going to generate a ton of different studies that are going to uncover the complexity of that space. And a lot of times, I think, when manipulation of people's viewpoints is happening, it's happening by not actually inserting false information, but it's just lifting up one little piece of information in this salience frame so that people overweigh it. Okay, here's another concrete way of explaining salience. My friends who have gotten pregnant and had kids, when they first get pregnant, they find that suddenly there's kids everywhere, in the grocery store, and in the breweries. Everywhere they go, there's kids, and they didn't see those kids before. And I don't see kids when I go places because it's just not salient to me. But suddenly, when they're asking these questions about, "How will my life change when I have kids?" seeing those kids is suddenly salient to them. And I think, the questions we ask and the things that are important to us, along with what we perceive as being valid or invalid, that's what makes up our salience frame. Does that make sense?
SPENCER: Yeah, absolutely. I think it would help to use an example. So give me an example of a societal topic, where you feel like the salience is becoming a big issue.
ASHLEY: The one that I find the most helpful is the opioid epidemic. When you look back at what happened in the opioid epidemic, one of the big questions is, how is it possible that doctors continued to prescribe opioids to patients for routine hospital visits and wisdom tooth removals? And how did that just go on for years? Of course, there was Purdue Pharma, the pharmaceutical company that was involved in that. But it wasn't 100% that they were giving false information. It was that they were taking this one study, that was a letter to the editor, that was a study of real patients, and probably a scientifically valid study, that showed, for that group of patients, opioids did not cause them to leave the hospital addicted. And they were touting that one piece of evidence in their promotional materials and their conferences and all of that. Of course, their strategy was bigger than that. But if you look at the full body of evidence, you're going to come to the conclusion that opioids are addictive. If you let that one piece of evidence have too much weight in your salience frame, then you might come to the opposite conclusion. And I think most people would view that as wrong, but it's not wrong because the study is wrong; it's wrong because it's misplaced in the doctors' salience frame.
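[Aside: a minimal sketch of the idea that a salience frame is a set of importance weights over evidence. The study names and numbers below are invented purely for illustration, not taken from the opioid literature.]

```python
# Each "study" points toward or against the conclusion "opioids are addictive":
# +1.0 supports it, -1.0 cuts against it. Values are made up for illustration.
studies = {
    "single_letter_to_the_editor": -1.0,   # the one reassuring result
    "long_term_follow_up_study":   +1.0,
    "dose_escalation_study":       +1.0,
}

def conclusion(weights):
    """Weighted sum of the evidence; positive means 'opioids are addictive'."""
    score = sum(weights[name] * direction for name, direction in studies.items())
    return "addictive" if score > 0 else "not addictive"

balanced_frame = {name: 1.0 for name in studies}
skewed_frame = dict(balanced_frame, single_letter_to_the_editor=5.0)  # one study made very salient

print(conclusion(balanced_frame))  # -> addictive
print(conclusion(skewed_frame))    # -> not addictive
```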
SPENCER: This reminds me of a recent Slate Star Codex newsletter, where Scott talks about how, if you look at the Left wing media and the Right wing media, it's not that they tend to lie — because each side will accuse the other side of lying about what's true — but it's actually much more common that they just focus on certain facts to the exclusion of other facts. They pick up on some real evidence, but then ignore all the other evidence. And so it still can give you a very biased perspective, even though it's not technically a lie. It just sounds like that's very much what you're talking about. Would you agree?
ASHLEY: Yes, that's exactly it. And if you're looking at both sides, trying to figure out how to mesh them and come up with an accurate view of what's going on, you start to see that just all over the place, as I'm sure you have.
SPENCER: What does it look like to do better at epistemics then? Because if it's largely an issue of salience, well, is there an objective right salience? Is there a way to push towards a more balanced salience? What's your thought on that?
ASHLEY: I think it's tricky because you wouldn't want a single salience frame. You want different scientific fields to blow up their salience frame for their particular topic. You want the engineers to have a huge salience frame around how to make safe bridges, and it's not wrong that they place outsized importance on that one thing. And the same thing, I think, with academic communities. I think it's fine for different communities to have different salience frames. But I think it's really about how you compile the different frames and how you pit them against each other, and how you organize conversations across disciplines. When I first started thinking of salience, I thought it was all having to do with importance, but I think it's not just importance. I think there are some principles to try to figure out what's a valid method for constructing a salience frame. I don't know what exactly the science of salience would look like, but I do think we need to invest more resources and more thinkers toward that direction, where the questions are not "What's true in economics and what's true in biology?" but rather "How do you pull from those two different fields so they converse with each other in a way that actually makes sense?"
SPENCER: It seems to me that the right salience has to do both with what your goal is, what you're trying to achieve, and with the Bayes factor of that evidence. In terms of goals, if you're a bridge builder, then clearly making safe bridges — anything about making safe bridges — should be more salient to you because that's connected to your goal. Whereas, if you're not a bridge builder, maybe that shouldn't be salient to you because you're not going to do anything with that information. So that's the goal aspect of it. And then the second piece, the strength of the evidence, I don't know how familiar you are with Bayesian thinking and Bayes' rule. But Bayes' rule, the mathematical rule, basically tells us, if we have a certain probability assigned to a hypothesis, and then we get some evidence, how much we should change our mind, how much our probability should change. Although it's not always practical to quantify it for real-world evidence — it can be hard to do the right calculation — at least the calculation tells us in theory how much you change your mind, and is the theoretical guidepost to how much you should change your mind. And therefore it seems to me that the greater the Bayes factor on a piece of evidence, the more salience we should give it, all else equal.
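[Aside: Bayes' rule in odds form, which makes the "Bayes factor" Spencer mentions explicit. The numbers in the worked example are made up for illustration.]

```latex
% Odds form of Bayes' rule: posterior odds = Bayes factor x prior odds
\frac{P(H \mid E)}{P(\lnot H \mid E)}
  = \underbrace{\frac{P(E \mid H)}{P(E \mid \lnot H)}}_{\text{Bayes factor}}
    \times \frac{P(H)}{P(\lnot H)}
% Worked example (numbers invented): prior P(H) = 0.2, so prior odds are 1:4.
% Evidence with a Bayes factor of 8 gives posterior odds 8 * (1/4) = 2:1,
% i.e. P(H | E) = 2/3, roughly 0.67.
```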
ASHLEY: Yeah, I really like that because I think a lot of these questions that go along with goals, if you sit down and brainstorm, you can come up with some specific measurable things that get at those goals. And once you have those measurable things, it's much easier to start bringing in more scientific structure to how you're thinking about salience. Yeah, I like that. One more thing about salience — and I think this relates more to the justice element of salience — but when we look at injustices, I think oftentimes what's happening is the person who has the power is overweighing the positive effects of their decision making and underweighing the negative effects. And I even go back to high school bullies, where someone who is bullied, for them, that bullying makes up a huge part of their salience frame, and how they feel and how much they think about it. But for the bully, if you ask them, they might even admit to occasionally making some negative comments. They'll be like, "Oh, yeah, that was a total of 20 minutes of the whole school year." So they're minimizing the salience of that. And I think that happens with leaders just naturally, that we want to focus on our good in the world, and we want to minimize the bad. I think that's another issue with salience and justice.
SPENCER: Right, that seems to relate to sort of the press secretary part of our brain that always wants to justify our actions and be able to say why, to other people, the bad things we did are not a big deal, and why the good things are especially good, and how, actually, if we're not introspective really carefully, we can even be deceived by our own press secretary and actually buy into it.
ASHLEY: Exactly. And I think the science of salience would need to account for what are the human motivations that might make a salience imbalance a little bit more likely here. In which case, maybe you need to bring in someone who would counterbalance that psychological thing onto the team of thinkers, something like that.
SPENCER: Alright, so let's go on to the next piece of this, the economic piece. What do you mean when you say that economic institutions or aspects of institutions have depreciated?
ASHLEY: That's a good question. First, let me give a systems thinking answer and then I'll give some more specific examples. The systems thinking answer is, systems work in ways where, if something's going wrong, there's mechanisms for noticing that thing that went wrong, and feedback loops that will come in and fix it. Ideally, there's a lot of different feedback loops that catch problems and that's how systems stay stable over time. But if you have the rate of generation of problems outpacing the system's ability to fix those problems through (what systems thinkers call) balancing feedback loops, then the system can start to go off the rails. And okay, that's super theoretical. But in a real world setting with systems, I think it's helpful to start thinking about metrics within systems, because metrics oftentimes determine who rises into roles of leadership, what kids get into schools, all of that stuff. And metrics can depreciate in ways we all have experience with. I'm sure there was a time when applying to college and having a bunch of extracurriculars was an accurate measure of how involved you were, how much leadership potential you had, how interested you were in the world. I do think there was a moment in history when that was a good metric. But of course, once the metric is in place, people start to game it. And I remember when I was in high school, my dad was like, "You just have to join as many clubs as possible to beef up your resume," and it was no longer meaningful. And I think there's been an update since then where college admissions will say, "It's not the number of things on your resume. It's really how involved you were, your leadership role." But you get situations where rich people will pay for their kid to become an entrepreneur in high school. And so the metrics get gamed over time, which is a depreciation of those metrics within the system.
SPENCER: This is really related to what's known as Goodhart's law. Sometimes it's expressed in the adage, "When a measure becomes a target, it ceases to be a good measure." That basically is, as soon as you decide this is how we're going to measure success or how people are performing, etc., then people figure out how to game it and it becomes a much worse measure than it used to be.
ASHLEY: Exactly. But there's other places in systems that I think are harder to observe and harder to notice that this has depreciated. I think we notice some of the things, like the fact that some of the top higher education institutions have a greater and greater share of their population from the top 1% of income earners. That's one indication that people in power are using the system to channel power to control pathways of upward mobility. And I think, because of that, a lot of the mechanisms that were once designed and once fairly well-functioning — to serve the population well, and to make sure everyone's needs were met, everyone had an opportunity for upward mobility — I think a lot of those have depreciated, just like the metrics have.
SPENCER: It seems like it's a sort of thing where you might have to keep reinvigorating them from time to time. Because if they become gamed, then what is the other option? Either you build an ungameable one, or you have resets periodically. What do you think?
ASHLEY: Yeah, I think that's the situation. You may update as you go along. But at some point, things go off the rails enough that you need a bigger system reset. That's the way I'm thinking about this at least.
SPENCER: Do you have a sense of what might have caused this? Or do you think this is just a natural thing that occurs that people in power naturally want to have their kids do well and want to succeed, and so on, and so there's just a natural pressure to do this unless some other force is preventing it or resetting things?
ASHLEY: Yeah, I think it's a combination of corporate forces, and people just naturally wanting to do good things, both for their kids, and also even for the world, but minimizing the negative effects. I think it's a combination of all of those things. The way I see economics here is, I think there's game theory going on when it comes to markets and firms and corporations, where there's a combination of collusion-like cooperation among corporations and competition, and in the setting of the market, the more these big firms collude or cooperate with each other, the worse it gets from a monopoly standpoint. It's going to be like a natural progression of power to the powerful in the corporate sector. And I think a lot of the inequalities that have gotten steeper and steeper and steeper are interacting with this in ways where, once upon a time, markets were meant to be a little bit closer to one-person, one-vote in terms of what kind of economic infrastructure gets built, what kinds of things firms invest in, all of that. And it's moving more toward one-dollar, one-vote in a scenario where inequality is rising. When I think about the economic paradigm shift, I think it's going to have to do with game theory. And I think it's going to need to involve some sort of cooperation and collective action among the people who recognize that, right now, they don't really have the economic power to act against some of the large corporations. But if they use technology, or if we redesigned institutions — say through Web 3.0 or through just creative design for communities in a variety of ways — I think there could be ways of solving those game theory problems with tools that are coming on the scene. That sounds a little vague, and it is vague, but in some ways, I'm just starting to think about the revitalization here, and that's just the start of my thinking in the economic realm. What do you think?
SPENCER: Yeah, I guess my favorite metaphor for companies is that they're like these large lumbering robots, but they're robots made of a combination of people and bits, and also just procedures. And they're really, really powerful machines. And they generally are trying to do something like make money. That's not a perfect model for what companies do, but they're trying to do something like that. And they're imperfectly trying to do that. They're not perfectly rational, but they're trying to make money. And then if you think about it, imagine they're a bunch of giant lumbering robots trying to make money, and then it's like, "Well, what do we do about that?" Well, we want them to try to make money in ways that benefit everyone, and we want to make sure they don't make money in ways that harm everyone. And so I guess that's kind of the frame I come to this with. They're really powerful. We need to make sure that they can't make money doing harmful stuff. And ideally, we want to make sure that they can make money doing helpful stuff. And I don't know quite how that fits into your frame. But that's my approach.
ASHLEY: That's the exact frame I'm thinking of. I'm thinking of multipolar traps with these multiplayer prisoner's dilemmas where, a lot of times, there is an incentive to do the harmful thing (like you're saying), with these powerful entities, because that's how they stay in the market. I think of the social media example where you have large corporations that are creating these addictive products that are not good for mental health. And ideally, exactly like you're saying, we want to create companies or organizations or institutions that would create positive mental health versions of social media. But the problem is, if you create a positive one that isn't addictive, it'll be competed out of the market. I would call that a multipolar trap, and I think that multipolar trap problem could be solved with a creative game theory algorithm.
[promo]
SPENCER: Can you give an example, maybe even just a simplified example, where game theory can help solve a multipolar trap like this?
ASHLEY: It's one of these things where I think we're gonna have to experiment with a lot of different ideas to get one that actually works. So it's not that I think this idea would work in the form that I'm going to explain it. But I think this kind of creative idea, we need a bunch of these generated. So one is, when we look at companies where there's a huge investment in infrastructure upfront, and perhaps a low marginal cost once that infrastructure is built, and that's when the company makes its money, well, that actually represents a lot of the format of these digital industries, these big companies; most of the financial investment is upfront. And the thing that drives that decision is projecting forward how much money we will make once we put this product on the market. And of course, to do that, they're going to be thinking about how do we cater to the needs of the people who will pay for it, which is perhaps a small share of the population, and it's the share of the population that has an outsized influence on everything, including the development of this corporate infrastructure. One way I think we could rejigger that is we could set up mechanisms where investment in infrastructure was determined through, say, the one-person, one-vote scenario. And then once that infrastructure existed, you could let that infrastructure make money, but the financial return at the end of the day wouldn't be the main thing deciding what infrastructure gets developed in the first place. If you have these mechanisms, and maybe these are mechanisms that are a bunch of if-then statements — like if 80% of the population says yes, and also it's projected to make enough money, then you go for it, but if 50% of the population says no, but the richest 10% says yes, then you don't go for it — you could come up with different mechanisms just to make sure what drives investments accounts for a greater share of the population's needs. I'm not saying that would work, but I'm saying that kind of algorithm, I think, could rejigger the game theory.
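[Aside: a minimal sketch of the kind of if-then funding rule being described. The thresholds and inputs are hypothetical, chosen just to show the shape of such a mechanism, not a proposal from the episode.]

```python
def approve_infrastructure(pct_yes_all: float,
                           pct_yes_richest_10: float,
                           projected_return: float,
                           cost: float) -> bool:
    """Decide whether to fund a piece of infrastructure.

    pct_yes_richest_10 is accepted but deliberately given no override power:
    a rich minority saying yes cannot outweigh the general population saying no.
    """
    if pct_yes_all < 0.50:
        return False                     # a majority says no: don't build it
    if pct_yes_all >= 0.80 and projected_return >= cost:
        return True                      # broad support and it pays for itself
    return False                         # otherwise, hold off

print(approve_infrastructure(0.85, 0.95, 120.0, 100.0))  # True
print(approve_infrastructure(0.40, 0.90, 300.0, 100.0))  # False: rich minority can't override
```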
SPENCER: Would something like that be deployed as an investment vehicle or as a government institution or as a new company? I'm not quite sure I understand where a mechanism like that would be tacked on.
ASHLEY: I think it could be an investment vehicle. Yeah, I think it could be a mechanism of figuring out what gets invested in, almost like Kickstarter. Kickstarter, I think, is an example of one of these game theory mechanisms where you stake money on something that you hope happens, and it might be something that's creative, like a documentary or a musical that's being put on, or something like that. And if enough people stake that money, then it happens, then the money goes there. Otherwise, you get your money back. So it would be some version of that, except with enterprises that had a corporate end.
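[Aside: a minimal sketch of the all-or-nothing staking mechanism behind Kickstarter-style funding; the backer names and amounts are hypothetical.]

```python
def settle_campaign(goal: float, pledges: dict[str, float]) -> dict[str, float]:
    """Return how much each backer is actually charged.

    If total pledges reach the goal, every pledge is collected and the project
    is funded; otherwise everyone is charged nothing (full refund).
    """
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges)
    return {backer: 0.0 for backer in pledges}

pledges = {"alice": 50.0, "bob": 200.0, "carol": 30.0}
print(settle_campaign(goal=250.0, pledges=pledges))  # goal met: all 280 collected
print(settle_campaign(goal=500.0, pledges=pledges))  # goal missed: everyone refunded
```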
SPENCER: It's funny, this reminds me of DAOs. Is that where you're going with this?
ASHLEY: I do think blockchain technology could be one technology that could enable some of this. I actually think there's going to need to be a social layer, like a cultural layer, to make some of this kind of stuff work. My view on Web 3.0 is that it can't be purely mechanistic. It needs to actually involve communities of people who care about each other, and social incentives. So on one hand, yes; on the other hand, I think it needs this other piece.
SPENCER: Do you want to explain DAOs for those who may not be familiar, so they understand how that relates?
ASHLEY: Let's say you want to lend out your power tools. They're in a box, and they're locked in that box. If someone pays you $50 and also stakes, say, $200 to replace the power tools if they don't return them, then they get a code to unlock the box. So you can use that to make money. You can use that to automatically interact with the world. And it has that little mechanism of accountability where, if they don't return the power tools in good condition, you keep the full $200, all of that. That arrangement is basically a smart contract. DAOs are basically organizations built up out of a bunch of smart contracts like that. If you look at most insurance companies, or most online shopping centers, they're really just a bunch of contracts that are embedded in each other. And if you can automate those contracts, such that there aren't companies managing them, but rules that are built into code managing them, the idea with the DAO is to replace companies with these automated code setups. Now like I said before, I don't think you can do that because I think so much of the human dynamic is important to making institutions succeed. But I do think you can automate quite a bit of it.
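[Aside: a plain-Python sketch of the power-tool rental logic just described. In practice this would live in a smart contract on a blockchain; the class, amounts, and unlock code here are hypothetical.]

```python
class ToolRentalContract:
    """Holds a renter's deposit in escrow and releases an unlock code."""

    def __init__(self, fee: float = 50.0, deposit: float = 200.0, unlock_code: str = "1234"):
        self.fee = fee
        self.deposit = deposit
        self.unlock_code = unlock_code
        self.escrow = 0.0   # deposit held by the contract, not by either party

    def rent(self, payment: float, stake: float) -> str:
        """Renter pays the fee and stakes the deposit; gets the code to the box."""
        if payment < self.fee or stake < self.deposit:
            raise ValueError("insufficient payment or stake")
        self.escrow = stake
        return self.unlock_code

    def close(self, returned_in_good_condition: bool) -> dict:
        """Release the deposit back to the renter, or pay it out to the owner."""
        refund = self.escrow if returned_in_good_condition else 0.0
        payout = self.fee + (self.escrow - refund)
        self.escrow = 0.0
        return {"renter_refund": refund, "owner_payout": payout}

contract = ToolRentalContract()
code = contract.rent(payment=50.0, stake=200.0)           # renter can now open the box
print(contract.close(returned_in_good_condition=False))   # owner keeps the $200 deposit
```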
SPENCER: Yeah. And just to clarify here, the context in which people are usually talking about smart contracts and DAOs is Web 3.0 or the blockchain. And I think the reason that that's really pertinent here is that, if you're gonna have code that's running a contract, and the code runs it automatically — so whatever the code does is what happens — you want to be able to see that code and you want to make sure nobody changes that code. And so by putting it on the blockchain, basically, everyone can look at that contract, they can read the code, they know it can't be altered by anyone, and everyone knows what they're getting into. And then that kind of smart contract can be executed automatically under certain conditions, for example, running on the Ethereum network. And then, DAOs build on that to try to create decentralized organizations that are run on these smart contracts. Although, at the end of the day, humans have to be involved in one way or another. I think most DAOs end up having a leader or committee or something like that, even if some of it is automated. Is that right?
ASHLEY: Yeah, definitely. And at what point is it no longer a DAO, and is it something else? I think a lot of the language around Web 3.0 is a little bit murky because the space is still building out. The space is still discovering its value. But yeah.
SPENCER: I think these new governance models are really exciting, like the ideas of, what if you had an institution that had automated voting rules, and the number of votes people got was determined by smart contract and so on. Or what if you had more things powered by the Kickstarter model? I think it's very exciting, but I also have some skepticism based on having examined some of these systems. For example, Kickstarter is notorious for projects that don't get finished or take years to get finished, and way overpromise and underdeliver. It seems a lot of things that get funded there, obviously some of them are really cool and really great, but a bunch of them are sort of just hype machines, and they get a lot of funding, and they don't really deliver on the promise. But because they're designed to appeal to the masses in a way that resonates, and people want to give them money, they end up succeeding on Kickstarter. And similarly, but with different flaws, I feel like DAOs in practice have engaged in quite weird behavior. I'm not familiar with that many DAOs but of the ones I've looked at, you had one that ended up bidding on buying (what was it?) a copy of the American Constitution or something like this, at some really absurd price, because the DAO's only purpose in existence was to buy this copy of the Constitution so, okay, it's just gonna keep bidding and bidding and bidding on this thing. Or there's another DAO that I think basically most of the money was just stolen. So yeah, I think there's a lot of promise here, but I'm also just like [skeptical sound] when I actually look at these things in practice. I'm curious, am I being unfair? What do you think about this?
ASHLEY: No, I completely agree. I do have hope in the long run. And I think we are in a stage where people don't yet know what the capacities of the technology are. They can't yet fully anticipate the problems. And I think that's naturally going to cause projects to take way longer than people predict. So that would be a defense of the DAO community. But yeah, I think the DAO community in general, is building the base layer of technology that will be necessary for the kinds of institutions we need. But there needs to be a lot more robust social and value-based community on top of that, and I don't think the technology is ready to bring in the social aspect and the values aspect. But ultimately, I think the tools that are being developed through that space right now, I think they'll be essential in the future. It's just right now, yeah, they're definitely buzzwords and people don't know how to properly evaluate them, so people over-invest and get burnt, which a lot of times they'll just say, "Oh, it's the wild West," and I think there's some truth in that. I think most of the projects right now, what they're trying to do will become important, but probably not for the exact project they're developing it for.
SPENCER: Okay, so we've talked about these three areas where you see society's institutions declining, in epistemics, economics, and then in governance, which we touched on just a moment ago. So where do we go from here? Suppose this is true, and we're declining in these three areas, what's next?
ASHLEY: I think we're gonna have to start with the epistemic layer. Because I see it as increasingly difficult going from epistemic to governance to economics, where you can't get the economic fixes without the governance fixes first, and you can't get the governance fixes without the epistemics. And part of the reason here is that a lot of the governance problems, I think, are due to the way conversation is happening online, and the distortions, and the fact that people can't understand people who are thinking differently than themselves. To some degree, I think it's going to be developing trust across difference. And part of that is going to be the epistemics around what people actually believe. Because one of the key things, I think, that's stopping these conversations from making progress is misperceptions about the other side and what they believe about different perspectives, and just contempt between the groups. And I think this growing contempt is behind some of that distrust of institutions. And if you don't solve the contempt problem, I don't think you can actually solve the rest of it.
SPENCER: Could you give an example where you feel like contempt is holding us back in society?
ASHLEY: When I look at people distrusting academia, I think there's some actual real valid reasons to distrust academia, on one hand. On the other hand, I think some of that distrust, if not a large part of that distrust, is because they hear contemptuous language coming out of academia. And this is from different groups in the population. It's from lower income people, the working class, but also marginalized groups and groups that academics often claim to speak on behalf of.
SPENCER: Ah, so you're saying that people perceive academics as being contemptuous towards these groups? Is that correct?
ASHLEY: Yeah. And I think some of that is because I do think academics are actually genuinely contemptuous toward people who disagree politically with the majority of academia. And some of it is just because I think people can perceive contempt if they feel they're being treated like their group is beneath consideration. For groups that don't have access to channels of upward mobility, if that issue isn't being addressed by academics in a way that's meaningful and that's making progress, I think that can be experienced as contempt. Academics have a lot of privilege that people see, and when groups look at that and think, "They're not talking about my grievances and the grievances of my group; they're talking about all the status jockeying among themselves," I think that's often experienced as contempt.
SPENCER: So you're saying that people view academics as being contemptuous, and this causes people to dismiss academics unfairly?
ASHLEY: I think it causes them to distrust academics. And I actually make an argument that I think it's rational to distrust someone who holds you in contempt, or a group that holds you in contempt, especially if they have power, mainly because human groups and human dynamics can be creative in the ways that they hurt people that they really, really don't like, or that they genuinely feel contempt for. And if a group has power, they may not explicitly harm your group. But if they're given a way to take a jab kind of under the rug, I do think contempt can translate into not being worthy of trust. So yeah, I think oftentimes, people see academics and they perceive academics — either accurately or inaccurately, I think both are true sometimes — as being contemptuous toward their group. And that leads them to distrust academics. And I think there's a rational basis for that.
SPENCER: This reminds me of situations where, let's say someone runs a study, a scientific study on a topic, and they find a certain answer, and let's say that answer happens to match their own political views really well — let's say it's a liberal academic who discovers that conservatives are worse people than liberals, and let's say you're a conservative — I think it makes a lot of sense in that situation to be like, "Well, they were the one conducting this research. How do we know this wasn't just them justifying their own biases against my group?" and immediately dismissing it. Because research can be noisy, and because research can be biased, and because we have replication crises and so on, I think it's not unreasonable to think that, if someone has negative attitudes towards your group, that they might do research in a way that's biased against your group. I think that's sort of what you're getting at.
ASHLEY: Exactly. And I think in recent days, that will translate a lot into worries that the hatred towards us is so high that they're going to take away our rights. And I've had to do thought experiments for myself, where I was like, "If I truly believed that a share of the population was a certain degree of bad, would I take away their rights?" On one hand, the answer is no. On the other hand, if my mind really went there and I thought those people were true villains of the world, I can conjure up situations where I actually could see people making good arguments for taking away the rights of their fellow citizens. Just that thought experiment has reframed how I'm thinking about the culture wars.
SPENCER: So what do we do about this contempt? If contempt causes us to dismiss each other's opinions, do we try to reduce the contempt? If so, how do we do that? Or is there another approach to this?
ASHLEY: Yeah, and this goes back to some of these tools that I think Web 3.0 is trying to develop, where we're rejiggering incentives in institutions. And I think online institutions are a good place for experimentation. But imagine if we could set up incentive structures inside social media, or inside the spaces of deliberation, that rewarded people who are really good at understanding people across the aisle, people who are different from themselves. And I don't think the culture wars are just Left, Right; I think there's lots of different groups. But there are people — and I think there are people in almost every camp of the culture wars — who really can understand across difference, and speak in a way that makes people feel cared for and understood. And I think if those people were the ones upregulated, and if those people gained more status, more likes, more of a voice in the conversation, I do think that could reduce the temperature quite a bit. Because a lot of what's going on with the online space, I think, is that people live vicariously a little bit through people online, people they identify with, people who get their grievances. And when someone insults that person, it's like they feel personally insulted. But I think the reverse is also true, because I know that when I listen to someone I relate to, someone who I feel understands my grievances, and they talk to someone who is from a side I don't normally align with, when I hear that empathy between sides and I see that, "Oh, wait, there's someone in that group who understands me," it opens up my mind way more. Yeah, so I think the first step here is creating spaces where speaking well across difference is upregulated.
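[Aside: a toy sketch of what "upregulating" cross-difference voices could look like as a ranking rule. The camp labels and scoring formula are invented here; nothing this specific is proposed in the episode.]

```python
from collections import Counter

def bridging_score(likes: list[str]) -> float:
    """Score a post by likes, but only to the extent its approval crosses camps.

    `likes` is the list of camp labels ("A", "B", ...) of the users who liked
    the post. Total likes are scaled by the share coming from the post's least
    represented camp, so approval from a single camp scores zero.
    """
    if not likes:
        return 0.0
    counts = Counter(likes)
    if len(counts) < 2:
        return 0.0
    min_share = min(counts.values()) / len(likes)
    return len(likes) * min_share

print(bridging_score(["A"] * 100))              # 0.0: popular inside one camp only
print(bridging_score(["A"] * 30 + ["B"] * 30))  # 30.0: smaller, but cross-camp
```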
[promo]
SPENCER: It seems so tricky because, on the one hand, people tend to get more attention if they're critical. Those that are saying the other side is terrible, I think, on average, just in a memetic way, tend to beat out the person that's like, "Well, maybe both sides have a point." I wish the world weren't that way, but I do think it does tend to be that way. And second of all, a lot of the people right now who people look to, who are these big names that are listened to on political issues, do tend to trash talk the other side. So it seems like we're in a bit of a cycle where it's like, "Well, how do we break out of that?"
ASHLEY: Yeah. On one hand, I think there is part of this that is natural, that people will naturally be attracted to conflict. But I also think part of it is not natural. I think part of it is the goals of the algorithms in these spaces. What are they being asked to do? My hypothesis is that they're at least in part being asked to maximize profit. And profit depends not only on clicks and likes and engagement, which is what the companies say they're telling the algorithms to optimize, and I'm sure they are. It also depends on how much your attention is worth: if you can make people more responsive to ads, more likely to click, more susceptible to influence, then the price per click, or the price per minute of your time, is going to go up. I think that creates an incentive to act on us in a way that actually changes our preferences, that makes us more anxious, more manipulatable. And when you're in that anxious state, I think you are going to be way more reactive to the negative. Whereas, I think it's a developed taste to enjoy speaking across difference. And in some ways, that's part of the goal of education: to develop in students a taste for learning new cultures, for understanding things that they couldn't have conceived of before. And I think the algorithms could be designed to develop that taste as well. It's just that that's not profitable, so we'd need a different goal to give the algorithms.
SPENCER: It seems like you're suggesting that putting people into an anxious, threatened state causes people to buy more products, or at least click more ads. I'm a little confused why that would be true. I mean, I could see that being true for ads that are scaremongering, but it seems like a lot of ads are not scaremongering. Do you think that, for some reason, we're susceptible even to ads that don't seem to be about danger or fear?
ASHLEY: Well, I think insecurity is definitely going to make people more likely to respond to ads, because most ads meet some psychological need. And if that psychological need isn't something that you feel deeply, you're going to be less responsive. This includes the beer ad with the hot girl in the hot tub or whatever. I think that, if you can make someone more insecure about their sexuality or whatever, they are going to be more responsive to that. And I think that insecurity translates into anxiety.
SPENCER: This relates to this idea that I know you have, the dismissal economy. Do you want to talk to us about that?
ASHLEY: Yeah, the dismissal economy. It's the notion that the online space gives us way too much information. It's like information overload all the time. And it's not just that it's too much information. It's that every single piece of this information is packaged to feel maximally urgent, to get the click, to get attention. In that kind of environment, a lot of people would shut down if they really had to deal with every piece of information here. So to get through that, I think people need psychologically satisfying ways of dismissing some of the information coming up, and that's not necessarily a bad thing, because not everybody owes their attention to every genuine problem out there. It's okay to say, "These are the three things in the world I care about. And then everything else is important, but it's not my issue." I think that's fine. But I think what happens is people need that psychologically satisfying way of dismissing information. So they seek out commentators who will give them quick ways of dismissing. And sometimes this is like, "Oh, that person's out of touch. That person's too old. That's a biased source." We could go on and on and on just coming up with examples of ways people dismiss each other. And I think the problem happens when it becomes ingrained in our way of thinking, where we start to perceive our quick dismissal of something as a valid dismissal of that entire problem. And I think this is becoming a problem between individuals, because people have different news feeds, different things they care about. And a lot of times two people will come together in conversation, and they'll bring up something they've been thinking about. And if that's not something the other person has been thinking about, they may have this armor, this cognitive armor, for shutting that down, for saying that's not valid in some way, that's not worth thinking about. And when people encounter that, a lot of times, it just shuts down conversation between people in a way that I think can actually hurt relationships. But it also hurts our ability to perceive things accurately in the world.
SPENCER: It sounds like you're saying information overload is unpleasant. We're given these simple answers to how to reduce information overload by being told to ignore certain information coming in. People will tell us, "Oh, don't trust them, they're not reliable," etc. And then that gives us a greater sense of ease. So what's a better way to deal with information overload?
ASHLEY: The first technique is just, rather than dismissing something with a dismissal that pretends to be logical, to tell yourself, "That's not in my wheelhouse," and to have a really clear boundary around which things you will pay attention to, the things you're called to pay attention to, and which things may be important but, when you encounter that information, you're going to shut it down, not because you think the arguments are invalid. Maybe they're invalid, and maybe they're not. It's rather because you just don't have time to investigate. I think just clarifying the methods of dismissal could help. Now, it doesn't solve the problem, I think, between people who care about different things because, if you encounter a friend who's following a topic online, and maybe feels personally involved in the topic, if you say, "Well, that's not in my wheelhouse. I'm not going to talk about it," a lot of times for them, that's just like telling them their issue, their pain, doesn't matter. So that's a tricky balance. And it may be something where, with friends within a certain inner circle, you'll care about their issues enough to know a little bit or at least to listen to them. But that doesn't mean you'll research everything on the topic. The other issue with this that I've been thinking a lot about recently is the imbalance in topics. There's a lot of topics where one side has a ton of information and a ton of research and studies. And the other side is just kind of like meh, they're not really that engaged. And that frustrates me because I usually like hearing both sides. And when I go to one side and hear weak arguments where I know there's better arguments than that, I feel like I can't assess the issue.
SPENCER: What's an example where you see that happening?
ASHLEY: A couple of examples: one is Bitcoin, I think. There's a very fervent pro-Bitcoin community. Whereas, if you want the arguments against that, they're oftentimes not very well fleshed out, or they're not really engaging with the best arguments from the Bitcoin community. And I've been thinking about this a lot recently, because I'm trying to write a chapter on Bitcoin and I'm fairly neutral on Bitcoin myself actually. I honestly don't feel like I've heard enough of the counter-Bitcoin side that really does a good job. But that also doesn't convince me that they're wrong. And another example, I think, is pesticides, the effects of pesticides on health. The way I've started to think of this is: if you look at the 'pesticides are a problem' side, my guess is a lot of that is very good information, scientifically accurate and everything, and some of it's not. But the 'pesticides aren't a problem' side is not very well constructed. There aren't people out there who are super fervent about that side. And it makes it really difficult to make sense of the information available.
SPENCER: Yeah, it's a really interesting point where the level of engagement on the two sides of an issue is just so different. I see this around the dangers from AI, where people who think AI is gonna be really dangerous tend to be really fervent about it, and will engage a lot and give you a lot of arguments about it. People who are like, "AI is not that dangerous," are not that engaged and they don't go that deep in their arguments, and they don't seem as well thought out, which doesn't necessarily mean they're wrong, but it just means that it's harder to get the best arguments on that side of the issue.
ASHLEY: That's a really good example. Yeah, I feel the same thing with ChatGPT. There's an imbalance in sides, which makes me feel like I can't accurately assess.
SPENCER: Final topic before we wrap up, you have this idea of the adversarial economic model and how maybe that can be used to improve academia, improve truth seeking. Why don't you tell us a little bit about that?
ASHLEY: Yeah. And this is one that I'm very much experimenting with. And I definitely wouldn't want this to be all of academia. But I think academia could play around with a model that's like the model in legal courts, where there's one side arguing intentionally on that side, another side intentionally constructing the best arguments on the other side. And there's a judge in between whose job it is to sort through, from a neutral standpoint, which of the two sides wins the argument. And I've been thinking about this around Bitcoin as well because of this chapter I'm writing. One of the questions some of the Bitcoin community is asking is, "Why haven't academic economists taken up Bitcoin? And why aren't they teaching about Bitcoin? Why aren't they doing more research on Bitcoin?" And as a result, they're asking the question, "Should we fund an academic institute on Bitcoin?" And I've been asking, "Would I trust such an institute?" And if the institute were just generating pro-Bitcoin content, I think the answer would be no. I wouldn't really trust it, but I'd be interested in it. It's just, I wouldn't incorporate it in my courses because I would perceive that bias based on the funding, which I think has some ironies, because some of the things the Bitcoin community touts are against that kind of corporate sponsorship of academic information. But if they funded equally — say, three Bitcoin people and three anti-Bitcoin people, and also perhaps funded maybe three people in the middle whose job it was to develop a set of techniques for arbitration, and maybe those techniques would have to be ones that both sides agree are valid methods of arbitrating, valid methods of comparing the two sides — if there were something like that, I think I would trust it a lot more. But I also think we need something like this for a lot of these imbalanced issues. We need a way for people who don't have a stake in the game, who aren't committed to one side, to come in and see the two sides accurately. And I think ultimately, that would be in the best interest of many of these causes that are one-sided causes, because it would give them an avenue for actually going up against the best case of the other side. That's how scientific methods work; they work through attempts to disprove. And I think a lot of these fervent, passionate people in these topics just don't have enough people meaningfully attempting to disprove for it to be valid scientifically.
SPENCER: Yeah, I sometimes long for the neutral third party that's going to get in the middle and take information from both sides and evaluate it fairly and make sure the two sides talk to each other and hash out their disagreements. But I generally feel like this doesn't exist, or it exists to a shockingly small degree. Some people might say though, that academia is already engaging in an adversarial model that basically, you submit your paper to a journal, and then there are these anonymous peer reviewers who are reviewing it, and that's kind of an adversarial process. They can reject it if they don't think it's good work, or they can make you change it if they think it needs to be improved. And then, okay, let's say you get published in an academic journal, other academics might write other papers saying your theory is wrong and their theory is right. So how does that really differ from what's already happening?
ASHLEY: Oh, yeah, I don't think that's actually what's really happening with the peer review process. I think one of the issues here is that academia is a community of people. And to move up in that community, you have to validate the ideas of people with the power to give you tenure, to give you leadership positions in the community. So there do tend to be people bowing down to each other's ideas, giving more credence to each other's ideas, mainly because that is how you move up in the community. And I'm not saying that's the only thing that's going on. But I think that social dynamic is in tension with the kind of attempt to invalidate that would make for good scientific research. I don't think there's actually a super strong incentive to try really hard to invalidate a paper you're peer reviewing, and we need stronger incentives for that. I think adversarial models could help create actual incentives for people to try to invalidate each other's theories.
SPENCER: I'm a little confused about that, because I guess it's field-dependent but, in many fields, the reviewer is anonymous. So I don't see why they would have any incentive to kiss up to the person they're reviewing or say that their theory is correct. And honestly, my experience is that peer reviewers are pretty brutal. They often make really scathing critiques — often false critiques, not very good critiques — but they can be scathing and jerky about it. Obviously not all of them, plenty of them are nice. But is that different from your experience? Do you feel like people are really not saying their critiques when they're peer reviewers?
ASHLEY: I think you get scathing critiques from peer reviewers, but I don't think they're incentivized from the right place. I think they're incentivized from a place of, "I have to put my mark on this paper through my peer review role here." And they'll come up with something more because they have to come up with something than because they're really trying hard to get at the right critique.
SPENCER: I see. So it's more about showing that they put in time as a peer reviewer and impressing the editor or something like that?
ASHLEY: Yeah, exactly. The thing I think they have a disincentive for is actually thinking things that are too out of line with their community. And as peer reviewers, they're not going to go against the thought processes that they need to have to be in that community, so their critiques tend not to be the critiques that would socially isolate them in the community. They tend to be the critiques that they can make within that community and still stay in the community, which are oftentimes really petty, off-the-mark critiques.
SPENCER: I see. It's not critiquing the core of what's going on. You're allowed to critique some side issue that doesn't threaten anyone too much, or something like that?
ASHLEY: Yeah.
SPENCER: I sometimes wonder about this in social science because, for example, there was this whole field, social priming, where they would find things like, if you're holding a warm cup of tea, you tend to rate people as being warm, or act in a warm way, or whatever. Or if you're primed with words related to old people, you'll walk slower. And then after hundreds of papers being published on this topic by not just a few researchers, but many different researchers, now there's this disastrous rate of replication, where almost all or maybe even all of these findings that people have attempted to replicate, have failed. That's pretty weird. How do you get to a position where there are hundreds of papers on something that doesn't seem to exist? Surely, there must have been a lot of people who realized that these findings didn't replicate, and now, with the replication crisis happening, people are moving towards thinking these results probably don't hold up. But what was happening for years and years and years and years when this stuff was being published?
ASHLEY: That's a really good example. Yeah, I think it is this social dynamic thing, where you're in a community in which it's cool to believe that. And I think it's possible with priming that there's something people were intuiting in the situations that they were trying to get at with these studies, but the studies were weird, and they were constructed in a weird way. I think it's still possible there's something underneath the priming that was not priming per se, but something else that's true. Maybe, or maybe not. But yeah, I think the whole priming business was partly driven by the social dynamic within the field.
SPENCER: Well, on the truth of priming question, there's an important distinction, I think, between social priming and word priming or semantic priming (I think sometimes it's called?) where I think the word priming studies actually do tend to hold up. And that's where you basically prime someone with a word like (I don't know) 'laundry,' then later, it might be easier for them to think of a word like 'soap' or something like that that's related. Pulling up one concept does sort of elevate the probability of other concepts being accessible. But it's more like the social priming, where you're getting these dramatic changes in behavior due to this really subtle priming, that seems to not be replicating.
ASHLEY: Yeah.
SPENCER: A final question for you, Ashley. What do you want listeners to think about coming away from this conversation?
ASHLEY: I think my hope would be that, when they're encountering some of these really difficult things they're seeing in the world with mistrust in institutions and institutional capture and all of that, instead of focusing on the problem, focus on what the solutions could be and what those could look like, because I think we're going to need a lot of people generating a lot of goofy ideas before we get some that stick. But I just don't see enough people experimenting, going out on a limb and trying to think of ways we could restructure our thinking around institutions.
SPENCER: Ashley, thanks so much for coming on.
ASHLEY: Thank you.
[outro]
JOSH: A listener asks: Do you ever think it'll be possible in practice to emulate or upload the mind of a specific individual? And if so, pick a year when you think that's likely to happen.
SPENCER: I think there are a few different issues here. One is technological. At what point could you actually copy the information out of a mind and simulate that mind on a computer? If civilization exists for long enough, if humans can survive for millions and millions and millions of years, and civilization is thriving, and technology continues to develop, it doesn't seem shocking that one day we would be able to do something like that. That's the technical question. Then there's a philosophical question, which is that, if you were to do that, if you were to try to extract that information from someone's mind, and then run a simulation of that person on the computer, is it really the same person? And one way we could frame this is, suppose that you knew that either you were going to get tortured, or a digital simulation of you was gonna get tortured, based on extracting information from your mind. If you're selfish, will you be indifferent to those two scenarios? In other words, would you anticipate that you, whatever you are, would get to experience the suffering of the torture, regardless of whether you're yourself or if you're uploaded? And I think this is a really difficult philosophical problem coming from philosophy of mind, and I don't think we really have adequate answers to it.
JOSH: Yeah, I suppose if we're already living in a simulation, then uploading or copying or emulating our minds would just be a simulation within a simulation, and then maybe it wouldn't be too hard to imagine.
SPENCER: Yeah, if the simulation hypothesis is true, and already there's some giant supercomputer that we're running on without realizing it — Matrix style in some sense — then, yeah, if we're already running on a computer, then copying minds should be a lot easier.