January 9, 2025
What kinds of things really distort our ability to think clearly when making decisions? What is "psychological distance"? What is construal level theory? How can we intentionally increase or decrease psychological distance for ourselves or others who are making decisions? What are "decisionscapes"? When giving toddlers choices, we often artificially limit the number of available options to help smooth out the decision process and avoid decision paralysis. When might the imposition of this kind of artificial limitation be useful for adult decision-makers? What should we do with the productivity gains we've reaped (and will presumably continue to reap) from AI? Is it possible to show someone that you really care about them without making any kind of sacrifice? What has AI done to the value of art? Which individuals and companies currently own the means of digital production? How can we break free from algorithms that drive engagement by triggering negative emotions and promoting conflict? Is survivorship bias the ultimate cognitive bias? What are some lesser-known or lesser-used framing devices for making better decisions?
Elspeth Kirkman is the Chief Programmes Officer at Nesta, overseeing missions in early childhood development, obesity reduction, and net-zero emissions. She previously held senior roles at BIT, including establishing the company's North American office. Elspeth is the author of two books: Behavioral Insights (2020), co-written with Michael Hallsworth, and Decisionscape: How Thinking Like an Artist Can Improve Our Decision-Making (2024). Follow her on TikTok at @Karminker or on Bluesky at @karminker.bsky.social.
SPENCER: Elspeth, welcome.
ELSPETH: Thank you very much for having me.
SPENCER: In your book Decisionscape, you talk about different forces that affect our decision-making. What do you think is one of the most powerful forces that distorts the way we make decisions?
ELSPETH: When I started writing the book, it was originally going to focus purely on the concepts of psychological distance. So, psychological distance is definitely my answer here. When I say psychological distance, I basically mean the kind of gap between us as we are, where we are right now, and either other people, future events, or things that are happening very far away, for instance. Psychological distance has a really big impact on how we make decisions. When things are psychologically distant, maybe they happen far in the future, or they happen to somebody else, we just don't really care about them that much. We don't care about them in the same way that we would if they were up in our faces and happening to people we know and love right now, where we are. It really warps a lot of our decision-making and behavior in ways that can be really good, because it's protective. If we felt compassion for all people equally, we'd never get out of bed in the morning. But it can also be really bad, because it means that we kind of neglect things that really matter because they don't happen to be instantly relevant to us.
SPENCER: Is psychological distance the same as construal level theory? Or is that different?
ELSPETH: They're really closely related. Construal level theory is linked to psychological distance because typically, things that are more psychologically proximal have what we call a lower level of construal, so we think about them in very specific, super granular detail. The act of thinking about them in a detailed way makes us care about them and literally sweat the details, whereas things that are psychologically distant tend to have a higher construal level, so they're more abstract, not thought through in any great level of detail. That means that sometimes you want to manipulate psychological distance and you want to manipulate construal levels. If you're trying to do a visioning meeting at work, for example, where you're thinking about, "What should we be doing in the next 10 years? What are some really creative, different directions that we could take the company in?" You really don't want people operating at a low construal level, where they're thinking about Gantt charts and who's going to work in what role and whether the policies are going to be fit for purpose. You want them thinking in a much more loose and abstract way, whereas when you get down to, "We've got the vision now. We need to plan how it plays out." You want people back in that low construal level where they're really thinking about how to connect it to reality.
SPENCER: What's an example where psychological distance makes a big difference in people's decisions, and they might be better off doing it in a different way or adjusting the psychological distance?
ELSPETH: I'm going to tell you an example that I use in the book, which is not a real example, but it's one that I think is really amazing and really kind of brings it to life. It was dreamed up in the 1980s by a professor at Harvard, Roger Fisher. He was thinking about nuclear war, essentially, and the terrifying idea that really all rests on the decision of a few very powerful people. If the President of the United States decides, "Yeah, we're going to press the figurative button and launch a nuke," then everybody else's life and everything they know is kind of turned upside down because of that. He had this thought about it, which could be seen as a benefit, but it could also be seen as an issue. One of the features of how the decision-making process is designed is that psychological distance is really deliberately built into it. The President is very far away from the place where they might make a decision to send a nuclear warhead. They execute the process through quite a convoluted series of steps: you access the codes, then you make a phone call to instruct someone to instruct someone to instruct someone to press the launch button, and then someone very far away launches this nuke, killing a bunch of people that you've never had to meet or deal with. The whole thing hinges on psychological distance galore. He had this awful thought experiment: what if you changed it? Say that when someone is sworn in as President, they have to select a very close personal aide, and that person has the nuclear codes sewn in a cavity behind their heart. In order to access those codes and initiate a nuclear strike, the President themselves has to cut out the codes and cut out their heart in the Oval Office. It's obviously very extreme, but it really does change the equation quite a lot if the first drop of blood that gets shed is done directly by the President. 
From a rational, economic perspective, it shouldn't make a difference: if you're willing to kill thousands of faceless people across the nation, you really shouldn't mind killing one person in your office. But you can see how that would change the calculus quite significantly.
SPENCER: This reminds me of trolley problems, where they'll ask, "Okay, imagine a train going down the track and it's going to run over three people, but you could pull a switch to get it to run over one person." They do these very abstract questions in surveys and find certain results. But I always imagined that if you were actually there on the track and saw a train speeding, it might completely change the way you think about the problem, which is really hard to capture in a survey.
ELSPETH: There's a — I'm totally blanking on the name — YouTuber who basically faked real-life trolley problems. It's like, "I'm just going to pop out for a cup of tea," and there's somebody in a kind of station control office, and they're like, "Don't touch that switch because it's going to move the train track." And, yeah, people don't pull the lever.
SPENCER: Yeah. You can imagine all the doubt that might creep in, the anxiety, the adrenaline, and how that might completely change things, whereas, if it's just an abstract, far away thing, when you're kind of planning something from afar, you might make a different decision. Now, I'm sure if the President of the United States was trying to decide whether to send the nukes, there probably would be a lot of that emotionality that comes with actually killing someone, but it's still not the same. It's so, so abstract, in a way.
ELSPETH: Yeah, completely. The other thing I think is really interesting about trolley problems is that, and maybe this is another reason people don't pull the lever, is that there is no morally correct answer. This is the point of the trolley problem. But let's say that you think the kind of utilitarian approach of pulling the lever to make the net loss of life smaller is the right thing to do, just because you think that's morally correct. It's probably not legally the thing that you should do, because you can probably get done for murder if you pull the lever, whereas you're completely absolved if you don't touch it.
SPENCER: Yeah, that's a good point. So, it's just another way in which the abstraction may not match the real-world considerations. But when you're doing the survey, you're probably not thinking about that, "Oh, I'm gonna go to jail." So how does this apply at the sort of individual decision-making level when we're going about our daily lives?
ELSPETH: I think in lots of ways. There might be one-off decisions you make, like thinking about which charity to donate to, for example — I'm not part of the Effective Altruism movement by any stretch of the imagination — but I think the notion that we probably donate inefficiently is an interesting and important one. People will give to hyper-local causes that are meaningful to them because they've seen and felt the effects of the work, even when there are globally important causes that would do more good. Somebody may donate to a local hospice or a charity like that, which is a really good thing to give money to, but they would probably do more good in the world if they donated to somewhere providing malaria nets, for example. But they wouldn't do that because the psychological distance is too great to feel the effect of the malaria nets charity.
SPENCER: There are some really interesting studies suggesting that in advertising for charities, sometimes a single child's face looking really sad might be more impactful than just statistics about how big a problem this is or how effective the charity is.
ELSPETH: The identifiable victim effect is pretty powerful in this context. Sometimes, that's about collapsing the psychological distance because you're reducing that level of construal. You're giving the person a face, a name, and an identity and telling their story. But I think there's also something about the idea of a story. Nobody ever got compelled by a statistic. A story is much more attractive to people.
SPENCER: What does the word construal mean in construal level theory?
ELSPETH: That's a really good question. Can I think of a good definition?
SPENCER: Oh, sure. I'm just curious. Also, I don't have to ask that. But I'm just curious myself.
ELSPETH: I think it's kind of about processing, I guess. It's how you make sense of the situation. Are you thinking through it in a very bottom-up, granular way, where you're filling in every bit of detail and thinking everything through, four steps ahead? That would be a low level of construal. The higher level is that you're thinking and interpreting and making sense of the situation in much more of a top-down way. I'm talking about these in lay terms, this bottom-up and top-down, rather than anything to do with how we actually process it in a psychological sense. But, yeah, it's about concreteness and abstraction, I suppose.
SPENCER: When is psychological distance actually useful in decisions? When does it help us make better decisions?
ELSPETH: I think it helps us make better decisions when we might be overwhelmed by options. If your brain is essentially automatically eliminating, or at least heavily down-weighting, anything that's happening to somebody further away or far in the future, it just means you have fewer choices in the set that you have to seriously consider. So I think that can be really good. Also, even though I've said that we should think more globally when it comes to things like charitable giving, everything falls apart if you don't pay attention to the fabric of the society that you're immediately part of, and if you don't invest much more heavily in the relationships that you already have with the people you know and love than in relationships with people further away. So I think it is very natural; there's a reason that this exists. It's a naturally protective factor that keeps societies tightly bonded.
SPENCER: Do you think we should use psychological distance as a tool? That is, think, "Hey, I'm making a decision now," and actually increase the psychological distance on purpose when it's appropriate?
ELSPETH: Absolutely, yes. I think there are so many decisions we make where we become completely overwhelmed by the immediate aspects of that decision. If we instead think, "What is this going to look like in six months? What will we care about then?", the answer can be completely different. So let's imagine I really like the boss I work for, but I get offered a new job somewhere else. I might have all kinds of considerations in play, some psychologically distant, some not. I might be thinking, "I really want to accept this new job, but I don't want to let my boss down. He's just given me a promotion. He's gone to bat for me loads of times. I just can't imagine having that conversation. It feels too difficult and immediate. I'm just going to say no to the new job." Or maybe, more realistically, you might think, "Oh, goodness, what if I really oversold myself in the interview? What if I'm not going to be any good at this new job?" Because you don't know the new job in a lot of detail, but you do know your current job in a lot of detail, you can lose confidence and think of lots of ways that you could be absolutely awful at it in the abstract, without having any concrete sense of what you'd actually be doing day to day. That's the kind of thing where, if you just bit the bullet and took the new job, in six months you'd find it really funny that you were so worried, when actually the job is just emailing Gary all day long, or whatever the very detailed version of it looks like.
SPENCER: A technique I think could be useful in decision making is to ask the question, "What would I tell a friend that was going through this?" That seems to me a technique for creating psychological distance, even though we may not think of it that way.
ELSPETH: Yeah, I completely agree, because you're basically stepping outside of your own perspective. Psychological distance and perspective are really closely linked because the distance is relative to you, where you're standing right now. I think it's really interesting that when you're grieving, for example, or extremely stressed about something, we often use expressions like, "I'm beside myself," where you've gotten to a point where you've literally had to step outside and dissociate from your own identity because things have gotten so awful. We use the language of perspective metaphorically all the time in ways that we don't necessarily recognize. We say things like "get things in perspective," "you've blown it out of proportion," "we need to zoom in on that," or "see the big picture." Even the idea of centering or foregrounding particular aspects of something is basically just talking about perspective and what we're paying attention to in quite a metaphorical way.
SPENCER: It seems like this relates to your idea of "decisionscapes". What is a "decisionscape"?
ELSPETH: A "decisionscape" is, again, a metaphor, and it's this idea of, if we thought about the way that we make decisions as though we were an artist constructing a canvas, then we would probably be a lot more deliberate about certain aspects of this decision than we are when we just let it all run automatically. We would think about, "What do we foreground? What goes in the background? How are things sized relative to each other? Where's the eye drawn to? What's the overall composition? How do these things hang together? What is the viewpoint? Where should we stand in order to look at this? What am I choosing to include and exclude from the frame of this picture?" And all of these other questions. I think we probably can't make decisions in an extremely deliberate way like that, but certainly when we recognize that we are in a decision-making process, like "Which house should I move into? Or should I marry this person, or should I take this job?" I think you often can afford to take a moment and make sure that the things that are looming large in the foreground really are the things that matter most to you, that align with your values, that you want to be paying the most attention to, and that the things that don't do that are kind of faded away into the background and pushed beyond the vanishing point a little bit.
SPENCER: It seems to me that it's easy to have this kind of naive perspective that what's happening when we make a decision is there's an objective set of facts, and we're processing the facts and saying what to do, whereas, in reality, the framing of those facts matters tremendously. There's interesting research showing that, for example, if you frame something as a gain versus a loss, people might make a different decision. Even though the information is identical, it's just the way it's being talked about is different. Would you say that's related to what you're describing here?
ELSPETH: Yeah, absolutely. The loss aversion thing is really interesting. For example, if a patient is making a decision about whether to get a life-saving surgery or not, if you frame it and say there's an 80% chance it's going to be successful, people are going to be more likely to do the surgery than if you tell them there's a 20% chance that they will die on the operating table. Obviously, you shouldn't be gambling on a 20% chance of dying on the operating table, unless the cancer means that you're probably going to die anyway. But people do make different decisions depending on which bit you draw attention to. That is exactly the kind of thing that I mean. Are you featuring the right side of things? Are you looking at it from the right angle? Have you got the important bits in the part where the eyes are drawn in the picture, or are they kind of squirreled off to the side somewhere?
SPENCER: It reminds me of a Daniel Kahneman quote, "Nothing in life is as important as you think it is when you're thinking about it." This idea is that when you bring something to your attention, suddenly it's important. It seems to get magnified just by the fact that it's in the focus of your attention.
ELSPETH: Absolutely. And yes, the sort of "what you see is all there is" mantra as well, where when you're looking at something, nothing else exists. I very much agree with that.
SPENCER: Could you walk us through an example decision and how the idea of "decisionscape" might affect it?
ELSPETH: Yeah, let's keep it on the job theme. Let's say that you get offered a massive promotion. Maybe this is a fantasy scenario that many people will never encounter, but the promotion comes with a huge pay rise; it also requires that you move a thousand miles away, and you have to consider this overnight and come back with an answer. I think that's the kind of thing where you could very easily get seduced by, or put off by, the wrong details. The seductive version of it is that all you can think about is the paycheck. You're like, "Wow, this is amazing. A thousand miles isn't that far. If I'm paid that much, I can just hop on a plane back and forth whenever I want to go home, no problem." You don't think about the things that you actually want to prioritize and value in your life. Instead, if you were to say, "Right, let me sit down and think about what I want, given that this job is at least a five-year prospect. What do I want the next five years to look like?", you might realize that you care a lot about being close to aging family members. You care a lot about having leisure time and being able to see your friends. If those sorts of things are important to you, and you bring them into the foreground, you start thinking about whether the pay is enough to actually sacrifice them. You might end up with a very different answer than if you just accept the picture as it's presenting itself to you, where you've got the big dollar signs flashing up and essentially blotting out everything else. There might be other aspects of it as well. If you think about the element of the "decisionscape" where only some things are in the frame, there are going to be a lot of things that aren't actually in the picture. Often, the way that opportunities get presented to you is pre-curated. You might think that the only available options are to say yes or no, because that's how it's been presented to you.
But it could be possible that there's something outside of the picture. As you're currently looking at it, you might think, "Could I not do a hybrid version of this, where I don't have to move a thousand miles, but I'll agree to do fly-outs every three weeks or something?" It's really just helping you think about which bits you really want to preserve in this. "What do I want? What do I want the end picture to look like? And how do I make sure that I'm not missing something just because of the information that's being presented to me? Is there not a kind of third way?"
SPENCER: So is it fair to say that reality plus your own psychological orientation presents a default framing? Something about the job offer is given to you, plus the way you tend to think about things, gives you a default framing. And then you're suggesting people go deeper and say, "Okay, here's the framing I was given. But maybe there's a more helpful framing, not the default one, that I want to construct on purpose, one that will actually help me make this decision more effectively."
ELSPETH: Exactly. And I often think of it as sort of toddlers' choice. I'm sure it's got a proper name, but it's like, if you want your toddler to do something, you don't say, "Do you want to eat your dinner now?" You say, "Do you want sauce with the pasta or no sauce?" You constrain the choice to, "Look, the default here is you're eating the pasta, but I'm giving you a choice around whether you want sauce on it or not." (My children will never choose the sauce, for what it's worth.) It doesn't always work, but I think we imagine that's the kind of thing that stops when you age out of preschool. But actually, an awful lot of the world that's presented to us is some form of toddlers' choice, where we're just not questioning the kind of bigger choices that are available.
[promo]
SPENCER: It seems that the world is kind of giving us toddlers' choices sometimes. We may have grown up in a culture where it's expected that we do X, Y, Z, which could be going to college, or maybe it's expected that we immediately start working instead of going to college, or whatever it is. Essentially, the world, or our culture around us, has already narrowed things down to a very small choice set, and then we feel like we're choosing by picking between these prescribed, acceptable options.
ELSPETH: I totally agree. I think perhaps sometimes they're not even options; they're just whatever the logic of the system you live in solves for in every single opportunity that comes up. Something else that I flagged that we might talk about was productivity and artificial intelligence, and how maybe we're thinking about it wrong. I think that's a really good example here. AI has kind of come out of nowhere for most of us, in terms of just how powerful and impressive it is and how much real-world application it has. It feels like even at a government level, let alone at an individual firm or individual worker level, all of the conversation is, "What can we do better, faster?" But there's very little discussion of, "What should we do with the productivity gains that we're going to make?" It just feels like the status quo is to reinvest the money back into companies that are already very rich. Use the time saved to get people to be more productive. Don't reduce anybody's working hours; just do more with the same amount of resources. It feels like we're really missing a trick, because we have an at least once-in-a-generation opportunity to think about what we want. Couldn't we just buy more leisure time? Couldn't we say, "We don't want to aim to be the most productive economy in the whole world; we want to aim to be the one that people get the most joy, value, and benefit from, because we buy people their time back and invest the money that we're making back into the people in society"? But it feels like we're locked in this logic of, "Well, this is what we've always solved for, so we should just keep solving for it," when actually we've got quite a game-changing opportunity on our hands.
SPENCER: Won't that end up being a values judgment? Essentially, if you value achievement, if you value working hard, you'll go one way, whereas if you value enjoying life, finding meaning outside of work, connecting with family, and so on, you might go a different way. Is there really a right answer to that question?
ELSPETH: I think it should be a value judgment. Absolutely. I think it's probably impossible that that happens on an individual level, just because we won't be able to say, "Oh, I'm personally opting out of working hard, and I'll just reap the benefits," because it probably won't work like that. But certainly on a national level, I think countries should be saying, "Do you know what? This is the kind of place that we want to be. This is our kind of national identity, and that national identity might be hyper productivity, and everybody works extremely hard for absolutely jaw-dropping gains, or it might be that we're a place of leisure and we're about enjoying life." But I guess my point is that I don't think we are going to deliberately make that choice. I think a lot of countries are just going to sleepwalk into the let's make big, rich organizations richer and make people work just as hard as they're working at the moment, without giving them any of the time and benefit back.
SPENCER: It certainly seems that the US would be all about putting it back into productivity culturally. Do you think that's true of Europe as well?
ELSPETH: Yeah, that's probably true for the UK. But there are big differences within Europe, and it could be really interesting: there are already many wedges in intra-European politics, and this could be another pretty big one. I imagine some countries would choose leisure time over other things; many have defended leisure time in different ways. Time use studies look quite different across areas of Europe, so I think there would be quite a lot of variation.
SPENCER: Certainly, Americans tend to be shocked by certain European norms, let's put it that way, just from the point of view of how hard Americans feel they should work, or are expected to work, or maybe feel they have to work. They look at, for example, France, where people tend to take a lot of time off in the summer, and think, "Whoa, I can't believe they do that."
ELSPETH: Yeah, my favorite thing about French summers is that they have to randomize which bakers can go on holiday when. So you basically just get a letter saying you can go for July, or you can go for August, or whatever it is. Because the bakeries must stay open and they can't have all the bakers off at the same time.
SPENCER: It would have been a perfect opportunity to run some kind of baking randomized controlled trial, though, I imagine.
ELSPETH: I'm always thinking, what's the outcome that this is essentially like a natural experiment on? But I'm really not sure. Maybe it's business revenues.
SPENCER: But you would say the UK is likely to follow that road, just reflexively, without thinking about it, going down the "let's dump everything back into productivity" path.
ELSPETH: That's how it feels to me at the moment. It just feels like the conversation is really short-sighted. Rightly, there's a lot of celebrating of, "Isn't this amazing? It's going to be able to read medical scans more effectively than sometimes human judgment can, or this is brilliant. It's going to automate a lot of business processes." But it just feels very myopic, very low construal level in some ways.
SPENCER: So what would you like to see happen? What would an appropriate process be for this?
ELSPETH: It would be for the government to lay out — this is very lofty, isn't it? Calling on the government to do something — but I think it's probably for the government to work with people across society to figure out what value judgment we'd want to make. What would we want to get in return for these amazing productivity gains that, in theory, we're going to get from artificial intelligence? And also — this is launching into a slightly different issue — how are we going to make sure that skills that individual people used to be able to sell aren't simply reproduced by large language models and put in the hands of wealthy people, without any of the wealth going back to the skilled people? I read something that I thought put it excellently: how are we going to make sure that people are rewarded for the work that they do when the line between them doing the work and the work just happening becomes blurrier and blurrier as these models get trained more effectively? So it's basically these really big questions about what it means to be a worker, who should get the benefit, and what intellectual property is. It's figuring out what the social pulse on those things is, what people really think, what kind of country they want us to be, and then forming a strategy, a set of policies, around that.
SPENCER: It seems part of it is redistribution. How much should get redistributed from top companies, let's say, AI companies in particular? Another part of it seems to be about ownership. If you're an artist and you make art, and then the large language model copies your style, should you get a cut of that?
ELSPETH: Yeah, I think so. I'll give you a really specific example. This is probably not something that's going to happen immediately, but it's not totally unimaginable. Let's say there is a company that is full of analysts, for example, and all the analysts are quite fungible. They do similar types of jobs. They can be moved from project to project. It doesn't really matter who you get; they're all trained the same way. Suddenly, a lot of that work can be automated, but not just that work. You can say, "Okay, this Zoom call is kind of low stakes, and it's really just quite procedural. So I'm going to send my avatar that's trained on all of my previous Zoom calls, my email history, my Slack messages, and whatever, and it's going to look like me and act like me and make the same sorts of decisions as me, and it's going to log all this information." Then suddenly, you can attend seven Zoom calls at once. A lot of your work is happening a lot faster. There's a version of that where we have the same number of analysts employed; you're just doing much, much more work. There's a version of it where there's a cap to how much work you could possibly produce, and you say, "Right, we used to need ten analysts, and now we only need one." There's a version of it where you say, "Okay, what if we cared about giving people their time back? Let's not make everybody work as hard, but we'll still have the same kind of output and gains. People can spend half their week going to the gym, playing with their kids, doing all these other things while their avatar is busy in their Zoom meeting." That's quite an extreme characterization, but it outlines some of the choices around where you think the diminishing returns are on the various margins, whether it's producing more work, slimming down the workforce and changing the unemployment rate, or buying back leisure time. I feel like we're not being very deliberate about those things, and they're actually not that far off. That example might be extreme, but it's not a million miles away.
SPENCER: I wonder if the actual levers to pull resemble what is described in that thought experiment. Imagine, as you say, you're creating AI versions of a particular person. It's easy to think, "Oh, you can remunerate that person when those AI copies do the work, because essentially they're copies of the person." But it seems much more likely that what you'd end up with in such a scenario is not a copy of a particular consultant or whatever, but some kind of fine-tuned model designed to do consulting work in general that's not tied to any particular person. Then the company is just like, "Okay, great, we'll just replace the consultants with these AI models because they're a thousand times cheaper. By the way, we can spin up millions of them if we want to automatically scale to any business demands we have, and they're just better in every way." There's no particular reason they would even think about remunerating any individual for that model.
ELSPETH: Completely, and then you just have a completely different issue of unemployment. One of the things I find fascinating about AI is that a lot of people in the types of jobs we're talking about didn't have a lot of sympathy when cashiers in supermarkets got replaced by computerized checkouts, for example, or other versions of automating jobs. It was a psychologically distant issue. Nobody thought automation was coming for their job; other people's problems felt quite removed. Then suddenly, it turns out those are exactly the kinds of jobs that are next in line, and everybody's interested. The other thing, which is much more existential and a little less likely to happen, is that if the version I talked about did happen, where suddenly big aspects of my job can be done by my avatar because I've got my email history, my Slack history, and my Zoom history, that doesn't really last that long. I think about my children's generation, and I wonder, do they even get the chance to form a personality and a specific way of doing things? If everything they interact with is a model trained on all this data we've got on the internet, how do they ever not become robot versions of people that are just slightly bland and generic? I'm hoping that's not my kids' future, that they don't become these slightly bland, generic beings, particularly if you end up with brain-computer interfaces that can supply the next obvious thing to say. You're in a conversation, slightly stalling, and you think, "God, this is awkward," and you can just call on this large language model in your head to tell you the next obvious thing to say. How often do they start calling on that, to the point that they don't really know where they stop and the computer starts?
SPENCER: Are you imagining a situation where you can kind of instantly get that just by thinking about it?
ELSPETH: Yeah, it's probably a little bit extreme, but there are maybe versions of it where, let's say, you get a really good version of Apple Vision, or Google Glass, or something. This is something I think about a lot as well, the kind of relational economy. Let's say I have some special glasses with a little camera in them. When somebody I haven't seen for a while walks towards me in the supermarket, the glasses recognize them, and a little voice in my ear says, "Oh, that's so-and-so. The last time you saw them was four years ago. They just had a baby whose middle name is this, and they weighed this much when they were born. They'll be starting school next week. Ask them if they're nervous about starting primary school." That's amazing, because you're never going to be blanking, thinking, "Who's that person?" But it's also really quite sad, because nobody's ever going to trust that somebody actually knows them, remembers those things, and cares. There are little versions of it already. It used to be a big effort to make a mixtape, and now you make a Spotify playlist. It doesn't hit the same. Or you used to have to remember to go and buy somebody a birthday card, and now you can just order one digitally 12 hours before. It looks like a photo collage, but you've obviously just spent three minutes on your iPhone looking at the pictures it's pre-tagged of that person. All of these little things that erode the thoughtfulness of these gestures, I think, could really escalate in an age where you've got augmented reality assisting you and AI helping.
SPENCER: It seems to me there are two separate issues here. One is authenticity. People are now starting to see AI-suggested messages in their apps. The other day, I was going to post something on LinkedIn, and it literally suggested something for me to write to someone. I'm like, "Why would I want to use your suggested thing? I actually have a thing I want to say to this person." That kind of erodes authenticity, because it's not your words, it's not your feelings. There's a way to interact with AI where it is your feelings and it's just helping you express them in a certain way, but the default is that it's generating something that replaces your feelings. The other issue is sacrifice, or investment. If you get a beautiful card generated automatically from your phone, where it just finds the pictures for you, there was no sacrifice. There was no cost to create it. And so it just means less. A costly sacrifice has more meaning inherently, because it shows that you're willing to give something up. If you spent all weekend making the card, that's a much bigger sacrifice, and it shows a lot more care. Of course, there are other ways to create costly sacrifices, even if certain things become easy and costless.
ELSPETH: That's absolutely right. And again, it's about noticing. Do we notice that there's no costly sacrifice? Do we just think, "Great, I never have to be thoughtful about sending somebody a card again"? Or, "Oh, it's their birthday. I'm going to send them a gift." You feel good about sending the gift, but it arrives in an Amazon box, where previously you would have had to wrap it up and send it yourself. I just wonder whether we notice the slow erosion of those things, and whether we compensate by doing other things, or whether we don't, because we think we're still doing the same act, not realizing that the act isn't the gift, the act isn't the card; the act is the costly sacrifice that goes into producing it.
SPENCER: Yeah, it's interesting, because I don't know if the costly sacrifice itself is a good thing. It's a way to show that you care. You can show you care by making a sacrifice, but then you're making a sacrifice. There might be other ways to show you care that are better, in a sense, because they genuinely show you care without burning something, without burning your time or your money.
ELSPETH: That's really interesting. Have you got an example in mind?
SPENCER: For example, I can show I care about a friend by noticing something about them and really demonstrating a deep understanding of them as a person.
ELSPETH: But this is exactly the kind of thing where I think AI is going to get us in the most trouble, because the smart glasses or whatever, maybe it's not just AI, are going to do that noticing for you. I don't know what kind of thing you're imagining noticing. Maybe it's something superficial, like they had a haircut, or maybe it's something deep-rooted about who they are as a person. But I feel like we're going to end up with these new rituals where you see someone you know and love, and you both mutually agree to take the glasses off, or to switch off the chip, or whatever it is, so that the interaction is not being encoded. If I bring this up and remember it again in the future, it's because I was listening and paying attention; this is a raw interaction. Maybe that's the costly sacrifice, turning off the glasses.
SPENCER: Another connected topic is the impact that AI is going to have on art, which you might imagine could be quite profound. If simply describing a piece of art is enough to generate something that, to most people, looks beautiful, maybe even indistinguishable from what an artist might make, what does that do to the way we look at the value of art?
ELSPETH: That's a really interesting area. As you know, I've, slightly pretentiously, been rereading essays from the Frankfurt School and imagining that they were written today about modern technology. The Frankfurt School, for anyone who's not as massively boring as I am, was a group of 20th-century social theorists who basically developed critical theory to analyze how culture, media, and technology reinforce systems of power and shape society's ideology and our consciousness. So, quite big topics. Many of them were writing in the early to mid part of the 20th century, and many in the context of having fled Nazi Germany, for example. These are very acute, important questions. There's a particular essay by Walter Benjamin from 1935 called "The Work of Art in the Age of Mechanical Reproduction." It is basically about what happens to art. What is art in a world where the Mona Lisa can be printed millions of times over on a key ring, for example? Does making something a copy of a copy of a copy diminish its value? Is there something about the essence of the thing itself that's important, the kind of aura? What about even moving something out of the time that it was created? Obviously, time will pass, and things will naturally move out of the time they were created. But if you move it and hang it in a museum in a way that it was never intended to be displayed, what does that mean? What does that do to it? I think these are really quite fundamental questions. A lot of what the essay interrogates is that, way back when we first started making art, you couldn't move it from its location because it was cave art on a wall, and it was very tied to its time and place. Or it was something that you couldn't reproduce in a straightforward fashion because it was carved out of a particular piece of stone, and that piece of stone was very important.
Once you start drawing things in portable ways, and particularly once you move into digital or reproducible print art, all of those things you would think of as the very definition of a unique piece get eroded. I've been rereading that and thinking a lot about it. I think the most striking takeaway is that all of the things that were concerning and worrying back in 1935 about mass reproduction are the exact same fears that we have today. That's not to minimize them and say they came to nothing, because I don't think that's true. It's just so interesting that there are no new crises, as far as I can tell, in this new era of technology.
SPENCER: One potential difference could be that today it's often really hard to tell whether a piece of art was made by AI. Recently, I played this little game where you try to guess, and I don't think I can reliably guess. There's definitely some AI art that's obviously AI, like it has the wrong number of fingers, or the faces in the background look really weird, but we're getting to the point where at least some of it, from my point of view, is not distinguishable. Of course, probably a forensic expert could tell. Maybe a great artist could tell, but at least from my point of view, I can't. Do you think that changes anything, or not really?
ELSPETH: One thing that will certainly change is the incentives of human artists, assuming they continue to exist. A much more prosaic version of this is that now, if someone applies for a job and I'm reviewing the applications, suddenly the value of a spelling mistake has flipped on its head. You used to think, "Oh, really bad attention to detail. I can't believe they made a spelling mistake." Now you think, "Oh, wow, they didn't use ChatGPT, because they've made a mistake." You end up with this kind of perverse system where you want to read back through your application and, at the very end, switch around two letters so that it looks like a real human wrote it. I wonder if we'll see something like that in art. I guess a little bit like movements such as surrealism, which were deliberately absurd responses to the expectations of classical forms. Of course, the AIs will catch up eventually, but it would be quite interesting to see how it changes the way that we move forward relative to what might have happened otherwise.
SPENCER: That's really interesting. So artists will start to make purposeful mistakes, but different types of mistakes than the ones AIs make, so you can say, "Oh, this couldn't have been made by an AI." And then eventually the AIs will be trained on them and try to copy those mistakes.
ELSPETH: Yeah, like an annoying youngest sibling.
SPENCER: So even if the problems that AI art presents today are not really different from the ones presented by mechanical reproduction, what do you see as some of the biggest challenges?
ELSPETH: I think maybe it's broader than art. There's this other essay that I was reading, I think from 1941, by Herbert Marcuse, called "Some Social Implications of Modern Technology." Some of the things he's talking about are really fascinating. We're talking loads at the moment about how AI is going to be this amazing co-pilot for humans. This is the tagline of everybody that's selling AI: "No, no, it's not going to replace you. It's going to be a co-pilot. It's going to allow you to do your job better. It's going to augment you." And there's this really interesting bit in the Marcuse essay where he's basically saying that technology is the thing doing the job, and humans are the co-pilot; they chip in and correct it when it sometimes goes wrong. We think that technology is assisting us, but actually, we're very much at its beck and call. He talks about some quite benign examples of how technology changes the way we live our lives. The example he uses is the invention of roads and cars. Suddenly the way that you travel across the country is preset by somebody else. Somebody else has done all the thinking and built the infrastructure, and you just follow the line on the map in the vehicle that they've made. We wouldn't think of that as a bad thing. It's much better than having to figure out how to do it yourself on horseback or on foot, but it does change the way that you experience a cross-country journey. And I was thinking: if you could show people in that time the tyranny of the personal computer, the fact that we've all got neck deformities and fingers and forearms that aren't operating the way they probably should, because we spend so much time hunched over our computers, typing on keyboards and using a mouse, people would probably be horrified that we've basically been tamed into this strange way of living by this machine.
Then you forecast it forward and think, "Well, what's the version of that that comes out of this new AI age?" He also talks a lot about the idea that technology often prizes efficiency over humanity, so if those two things ever come into tension, efficiency will always win out. We've talked a little bit about that already. And then he also has these interesting ideas, very Neo-Marxist, I guess, about how whoever owns the infrastructure of the dominant technology of the day also has complete social control. It's very interesting to think about Elon Musk buying Twitter through that lens. It's like, "Yeah, you are buying the technological infrastructure." You're buying the forum, you're buying the marketplace for ideas, in order that you can have social control. It's not a technological acquisition at all. It's about social power.
[promo]
SPENCER: It seems to me, sometimes, through control of technology, you have individuals or groups exerting control, but a lot of the time it feels like nobody's driving the ship. It's like technology's just doing its thing. There are a bunch of companies in competition, and it shifts culture, not necessarily in a good way, but nobody's even intending for that to happen. It's just a consequence that occurs through lots and lots of different companies offering products.
ELSPETH: Yeah, I think so. Again, it's about incentives and people responding to them. But what kinds of incentives do those things create? One of the other things I was going to talk about is rage bait, and the idea of the rage economy, which I really hadn't appreciated at all until I started creating stuff on TikTok. Once you get into the TikTok creator fund, which is where you pass a certain number of followers and have a certain number of views in the last however many days, you start making money from TikTok. The more people view your content and the more people engage with it, the more money you make. I just had this realization where I'd gone from one day not being in that creator fund to the next day earning money from my videos. I made this pretty silly video about how it turns out that mice react differently to male and female researchers in a lab. And this could be potentially quite important, because if my studies are being confounded by this interaction with the experimenter, we need to know that. But people immediately started arguing in the comments, basically relating this to the whole transgender issue, essentially. I was like, "What the..." And I immediately had this reaction: "I don't want people to be arguing in my comments about this. This is not about this. People are saying offensive things. People are going to get upset." And then I thought, "Oh, hang on a minute. I'm making money from this. Now it's good if people argue in my comments. This is great. I'll just sit back and rake it in a little bit." And I just had this moment where I thought, "Oh, shit. This is why all this rage-bait content gets made," where you're deliberately trying to divide people and get them to argue in the comments, because people make money off of it.
I'd just been really naive and not quite realized how strong the incentive was. That sounds curious and probably fairly insignificant when you're just thinking about individual creators and individual videos, but when you think about how all of that ladders up into what we end up paying attention to, and what dominates the discourse on these big forums, it really quite meaningfully changes people's perception of reality, because all of these topics that nobody actually cares that much about, but that are guaranteed to get everybody angry, end up getting loads of airtime.
SPENCER: It seems some creators will put out stuff that is very black and white, where some people are going to be like, "Yeah, you got it," and other people are like, "What the hell are you talking about? No way." It's that binary kind of reaction. Some people are like, "That's totally wrong," and other people are like, "Yeah, that's exactly right." This creates an effect where people will start arguing. Some people will share it because they like it, and some people share it because they hate it, and it kind of spirals out of control.
ELSPETH: Absolutely. I think this is what fueled the tradwives trend, these videos of women doing traditional wife activities and roles. Nara Smith is the really famous one, where it's "join me while I make Oreos from scratch," or something. It's exactly what you just described. Half the people are there going, "Wow, this is incredible. I love the way that you're living your life. Gosh, that's astonishing. You can make this," and half the people are there going, "Oh, my God, you're a slave to the patriarchy," and they end up fighting with each other. It's incredible.
SPENCER: One of the most ridiculous examples I've seen of using rage bait to get lots of traction is this group that makes these videos, like a cooking video, for example. They'll be like, "Okay, we're gonna make this amazing thing. You won't believe what it looks like when we finally unveil it," and they will drag it out an unbelievable amount of time. It's annoying; they're about to show you the final result, and then they get distracted, and they're about to show the final result, and then they show you something else. First of all, it's just maddening to watch because the payoff takes so long. But in the comments, people get so angry, and they start posting angry comments. Of course, that drives the algorithm, so this content gets millions of views. I don't even think anyone likes it.
ELSPETH: I completely agree, although my favorite one I've seen is this woman who, I think she basically just travels around to medium-sized towns. It's a very clever strategy. It'll be a town where there are maybe a few hundred thousand people that live there, and then she'll make a video saying, "I love visiting this town in the UK." She's American, and she did one in my hometown, Hull. Hull is in the north of England, and she goes, "I love visiting Hull in the Midlands." Immediately, people start going, "It's not in the Midlands. What are you talking about?" She's being, on the face of it, absolutely lovely in this video, saying really complimentary things, but all of it is just deliberately a bit wrong. Every single person will then tag somebody they know that lives there, and she's just going around doing this. You think, "What a genius. This is such a great way to get paid to go traveling."
SPENCER: It's genius. And also so messed up. This is not what should be driving our attention. Our attention should be based on how valuable things are, not on whether something gets facts wrong in a way that enrages you.
ELSPETH: I think a more serious version of that is that if you take these big things that end up dominating a lot of culture war conversations, like trans rights, for example, there's a big gap between how much people think others care about those issues and how much people actually care and think about them. People see lots of other people talking about something, creating a majority illusion where you think, "Gosh, everybody else must think this is really important, because the majority of people are talking about it." They're not. It's just the accounts that are blowing up and getting lots of followers, creating the illusion of a majority. Because you think that everybody else must care about it, you're like, "Well, I don't really care, I don't think about this that much, and I don't necessarily agree with all this, but I'm not going to say that, because obviously I'd be in the minority." Then everybody sits there thinking they're in a small minority when, in fact, almost everybody feels the same, and no one says anything. You end up with this completely distorted reality where something seems important to people that nobody's actually thinking about.
SPENCER: That's a good point. The further to the extreme they push those views, the more likely they'll get attention. If you post something very moderate on Twitter, it's probably not going to go viral. But if you say, "I believe that nine-year-olds who are trans for two minutes should all get reconstructive surgery," that's going to anger a lot of people. If you say, "Trans people don't exist, it's just a mental illness," that's also going to enrage a lot of people and get a lot of attention. It's really the extreme viewpoints very few people have that are going to cause this kind of reaction.
ELSPETH: Absolutely.
SPENCER: This is actually really relevant to one of the topics we were planning to discuss, which is survivorship bias. What you see on social media is essentially those posts that survive the selection process of what made it through the algorithm, and they may be very non-representative of what's on social media in general. They may be a very small percentage of the posts, but this topic of survivorship bias is much bigger than that. You mentioned that you feel it's the master of all biases. Why is that?
ELSPETH: To clarify for anyone who might not be familiar, survivorship bias is a type of selection bias, as you said, where we only see and therefore focus on the survivors. That might be successful people in their field, or even literal survivors. One example: in World War One, new types of helmets were introduced on the battlefield, and suddenly people started saying, "These helmets are really bad because they're causing some quite bad head injuries." They were completely wrong. People would previously have died of the same head injuries, but the helmets were protecting them enough that they survived. There was almost a catastrophic mistake made by looking at the literal survivors and thinking they're the total picture, that you can deduce all the information from looking only at them. A more prosaic example would be deciding you're going to get up early and go to the swimming pool to swim laps, and then getting really annoyed that you're the slowest person in the pool. You think, "God, I'm the slowest swimmer in town." No, you're the slowest of the people who got up at 5:30 in the morning to go swimming. That's a pretty select group. The reason I think this is the bias to rule all biases is that we find it incredibly difficult not to fall for it. It's incredibly difficult to remember that we're not seeing all the information. And even if we do remember, the whole point is that we don't have the rest of the information to compare to, so it's not terribly helpful even to notice it. All of the stories we tell as a culture, the product of our history, who we think we are because of the events we understand happened in our past: that's all just survivorship bias. History is written by the winners. The people who survived got to tell the story their way. On a family level, we have that too, with family legends and the stories that get told every Christmas.
If you really want to know about a family, you should think about what they don't mention, the things they're not allowed to say, because that's going to tell you a lot about what's being censored. That's going to tell you at least as much as what's being celebrated. On social media, whether that's survivorship bias or selection bias, you can argue the toss. Again, we're surrounded by this digital world where people only show the one photo where everybody was smiling, rather than the 15 where they weren't. If you're at work and you're constantly being promoted through a system, you're going to end up with that 5:30 a.m. pool feeling at some point, where you think, "Gosh, everybody else around me is so smart and intelligent. I can't believe I'm the imposter in the room." You may be the imposter in that room, but that room is a very small group of people who made it that far.
SPENCER: When I think about this topic, I think about Warren Buffett's description of a nationwide coin-flipping contest. Imagine a huge number of people. Every day they wake up and flip a coin. If they get heads, they go on to the next day; if they get tails, they're out of the competition. This goes on for many days until eventually there's only one person left, and they're being interviewed on TV: "Well, how did you do it? How did you learn to flip coins so well?" The person gives some kind of convoluted explanation of why they're a good coin flipper. Obviously, in this case, we know they're not good at coin flipping. There were just a lot of people flipping coins, and one of them had to be the winner. In an elimination tournament, if all you get to hear from is the person who won, they're probably not going to say, "Well, I just got lucky." They're going to give you some explanation. In ordinary life, the problem is we can't tell who's flipping coins. We can't tell whether something was a series of lucky coincidences, or whether the person actually used some strategy that was really brilliant. I think about this also with someone like Alexander the Great, who won an incredible series of battles. He's revered in history, but it's like, "Okay, well, maybe he was this incredible leader, or maybe throughout all of history, there are coin flips happening on whether you win or lose a battle, and someone's going to be the lucky coin flipper." Not to say that he wasn't really talented; it's just really hard to know in retrospect.
ELSPETH: I was actually debating whether I think survivorship bias is the master bias that rules all biases, or whether it's actually our ability to rationalize randomness. I've had these two things in tension, and what you've done is bring them together and show them to be the same thing, where maybe survivorship bias, at least in its historical version, is just the coin flippers.
SPENCER: But why is this so central? I absolutely agree it's an important bias, but why do you kind of raise it above other biases?
ELSPETH: I think it's probably slightly arbitrary. If I thought about it very hard, I might change my mind, but I just think it colors our perception of everything, because of the things that I talked about. The entire world that we're seeing is the unlikely product of whatever survived. I think it probably changes every belief that we walk around with in our heads. Like you said, a belief could just have come about from random chance, but we think it came about from a meaningful trajectory, where there was some sort of fated element to it.
SPENCER: It actually helps explain the popularity of so much self-help, because the premise of a lot of self-help work is, "Hey, I did this thing. You can too. Here's how I did it." But if you think about it as survivorship bias, it's like, "Well, okay, yeah, you did this thing. Assume you're not lying. You did do this thing. But (A) can you actually explain how you did it? (B) Do you really understand the role that luck played? And (C) even if you do understand those things, is it really applicable to other people?" Maybe you're just really weird, perhaps in a great way, that's incredibly hard for anyone else to replicate, or your strategy works for you but not for someone else who's not like you.
ELSPETH: I think maybe there's a positive version of this, and maybe we can bring it back full circle to construal level. Sometimes somebody's super successful, and they say, "Here's how I did it," and then they go on to say, "And you can do it too," and give you some exercise: think of the big vision of what you want to accomplish, then think about what you could do tomorrow to get one step closer, then what you could do in a week, and you build up from there. You're not changing the fact that they probably achieved that success through random luck as much as anything else. Sure, hard work too, but you probably have to be in the right place at the right time for the hard work to mean anything. But for the person trying to emulate it, you are increasing their chances of being in the right place at the right time, because you're forcing them to think at that low construal level: "Okay, how could I get from where I am to where I want to be?" This is also my justification for horoscopes. I know we're both astrology skeptics, and your wonderful research backs up that we are, in fact, correct. But I think one of the reasons people are so into horoscopes and tarot and runes and all of these things is that they help them do better quality thinking than they would without those prompts. If you cast your runes and they say, "Right, this is what you're going to need to think about," or you pull some tarot cards and they say, "This is what's likely to happen," it sets you off on a path of thinking, "Okay, how do I get from where I am today to what this thing is saying is going to happen in the future?" If I manifest something that I want and try to build a concrete journey towards it, what do the steps look like? People just find it a really useful way to help with that. And I think it can also be really harmful.
I don't think those things are necessarily a net positive, but I think there's a good kind of psychological explanation for how these things might work, even though the kind of magical version of them doesn't.
SPENCER: Right. One thing that tarot does is give you a framing on your life through the drawing of random cards. It's saying, "Look at your life through the frame of destruction or death or life or rebirth, or whatever the card means."
ELSPETH: Yeah, absolutely.
SPENCER: I just wish people could separate what is a useful tool from what actually gives you knowledge about the universe. Not keeping them separate can be quite dangerous: it starts as just a fun thing, but then you're making a serious life decision, you're being influenced by what it tells you, and you've lost sight of the fact that it's a tool for reflection rather than something that gives you information about reality.
ELSPETH: Absolutely. I totally agree with that. Something you just said reminded me of an idea I'm quite interested in, which maybe is exactly what tarot is doing: can you look at the same event as though it were a different genre? What if it was a romantic comedy? What if it was a horror movie? What if it was a serious drama? I think that's often quite a good abstracting way to think about a situation you might be in, because it helps pull out different features. What would you major on if it was funny? What would you major on if it was tragic? It can help you see things in a different light.
SPENCER: It's funny how the same problem viewed through different framings can feel like different difficulty levels. We might feel it's an impossible problem, and then we switch frames and suddenly, "Oh, that's actually easier to think about." I've heard of people doing this, for example, with, "Should I quit this project? Oh my gosh, I don't know. It's not going that well, but maybe I should keep sticking it out." And then someone's like, "What if you weren't currently working on the project, and you could join it today? If you joined, you'd be in exactly the situation you're in now. Would you do that?" And they're like, "Oh no, of course not." And then it's like, "Oh shit. Suddenly it's an easier decision."
ELSPETH: Yeah. When you do cognitive behavioral therapy or anything where you're trying to address catastrophic thinking, for example, or particularly if you're sensitive to, "Oh, everybody's looking at me. Or, God, I look like such an idiot in that situation." Sort of social anxiety. One of the things that you'll learn to do is to think about, "What if I wasn't there? What would have happened in my absence, or not necessarily in my absence, but if I'd been somebody else?" Often the answer is exactly the same thing. It's absolutely nothing to do with you. This person was just having a bad day. But I kind of love that. It's basically a nice way of saying, "What if you weren't the main character?" We've got this kind of cultural obsession with being the main character at the moment, and it's kind of nice sometimes, and a proven psychological technique to ask, what if you weren't?
SPENCER: I think that's a useful point of reflection: what if you weren't the main character? Elspeth, thanks so much for coming on today.
ELSPETH: Thank you so much for having me. It's been lovely.
[promo]
JOSH: A listener asks, "What fields of mathematics do you think should be taught in schools?"
SPENCER: I think it's hard to say there's one right answer for every person, but clearly basic arithmetic and basic usage of things like percentages. For people who are beyond that, some basic probability is really important, and maybe some basic statistics. I wouldn't focus on learning a bunch of statistical tests, but rather on the fundamental concepts: the fact that every mean has an uncertainty, and that as the number of data points rises, the uncertainty falls, so you can get a more reliable answer by having more data and should take that into account. Also the fact that two means can differ in a way that's probably just the result of random chance, whereas other means can differ in a way that's very unlikely to be due to random chance. I think geometry is less important. People should learn some basics: they should know what a triangle is, and basic concepts of area and circumference. But a lot of what's taught in geometry is probably unnecessary and not that helpful in ordinary life.
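[Editor's note: the idea above, that the uncertainty of a mean shrinks as the number of data points grows, can be demonstrated with a short simulation. This is an illustrative sketch, not something from the conversation; the distribution parameters (mean 100, standard deviation 15) are arbitrary choices.]

```python
import random
import statistics

random.seed(0)

def sample_mean_and_se(n):
    """Draw n points from one fixed distribution (mean 100, sd 15)
    and return the sample mean and its standard error."""
    data = [random.gauss(100, 15) for _ in range(n)]
    mean = statistics.mean(data)
    # Standard error of the mean: the sample sd divided by sqrt(n).
    se = statistics.stdev(data) / n ** 0.5
    return mean, se

for n in (10, 100, 10000):
    mean, se = sample_mean_and_se(n)
    print(f"n={n:>6}: mean is roughly {mean:6.2f}, standard error roughly {se:.2f}")
```

Running this shows the sample mean hovering near 100 at every size, while the standard error drops by roughly a factor of 10 for every 100-fold increase in data, which is the sense in which more data gives a more reliable answer.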