May 19, 2022
What does it mean to be "groupstruck"? How does groupstruck-ness differ from the bystander effect, normalcy bias, and other related cognitive biases? How do we break people out of being groupstruck? What does it mean to be a "bounded" person? How can we build up better decision-making heuristics? What sorts of decisions do people usually not quantify but should (and vice versa)? How can we make rational relationship decisions without coming across as "calculating" or cold? How does anthropic reasoning affect our hypotheses about the nature of the universe and life within it (e.g., the Fermi paradox, the simulation hypothesis)?
Katja Grace is a blogger at worldspiritsockpuppet.com and researcher at aiimpacts.org. Follow her on Twitter at @KatjaGrace.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast and I'm so glad you've joined us today. In this episode, Spencer speaks with Katja Grace about group dynamics and social pressure, how to be a better person, and quantitative versus qualitative methods.
SPENCER: Katja, welcome.
KATJA: Thank you. It's good to be here.
SPENCER: So the first topic I want to discuss with you is an interesting form of bias that might prevent the world from dealing with really important challenges. Do you want to tell us what being groupstruck is, and why you think it matters?
KATJA: Yeah, I'm not sure what it is, but I use the name for a set of strange behaviors it seems like people have when they're in groups. The kind of situation that inspired thinking about this is an experiment in the 60s where they piped smoke into a room where they had people filling out a survey. And they found that if there was just one person in the room, that person would worry about the smoke and get up pretty quickly. Whereas if there were multiple people in the room — I think three was the number they had — then they would sit there for much longer for some reason. So you might wonder why that is.
SPENCER: My understanding is it was even longer if they had confederates in the room that just would sit there acting as if nothing bad was happening.
KATJA: Yeah, that's true. I think that's sort of less surprising to me — that if the people around you are pretending nothing's wrong, you won't do anything. Whereas in the case where they're not even pretending, they're just real people, it seems more surprising that just no one acts.
SPENCER: I ran into this — well, it actually happened to someone I know. They were sitting in a lecture and a fire alarm started going off. I don't think there was any smoke at the beginning. But basically, the teacher ignored it, and then all the students ignored it. And then I think there was sort of a smell of smoke. And so my friend, who was sitting at the back of the classroom, finally gets up and walks out of the classroom, while everyone else just stays there. So yeah, I do think that this sort of thing happens in real life, though I don't know if anyone's tried to replicate that original study, or what happened with that.
KATJA: Yeah, I don't know about proper, real replications of it. I know someone made some videos that roughly did a replication and put them on YouTube, so you can see what people look like when they're thinking about whether to leave. But I think the main observation is that, for the person in your friend's situation, somehow it's really hard to get up and leave, even though you kind of know what the right thing to do is. And I think there are a lot of situations that feel a bit like that, to me at least. So without knowing what the explanation is, my friend came up with the word groupstruck for it. And I think it's interesting to figure out whether it's a general thing.
SPENCER: Would you say that the early response to COVID was like this? I had the really intense impression, once I became convinced that COVID was going to be a really big problem, that I sounded like a crazy person saying it, and I ended up doing a blog post in late February. And you know, some people actually got angry at me for doing it, saying I was being alarmist or whatever. But I definitely had this really strong sense that I was breaking some social convention by talking about it. And I know people who were significantly earlier than that — I wasn't even the earliest to raise an alarm.
KATJA: Yeah, I think that is another good example. Probably also, the thing more notable to me with early COVID was wearing a mask — another situation where I really felt this sort of pressure on my behavior in a group, which is really strange. In early COVID, before people were wearing masks, when I wanted to wear a mask, it just felt embarrassing to wear one, especially a big mask. I'd want to wear a P100 mask, which is much more effective, when other people were just wearing surgical masks or something. My brain would think that it had to explain itself, imagining what I would say if someone asked me, "Why are you wearing this much more effective mask?" Which is sort of surprising, because, you know, it is much more effective. Why should I be terribly embarrassed about that, especially once society at large is doing all kinds of things about the pandemic?
SPENCER: Yeah, I noticed a lot of people wearing cloth masks long past the point when you'd think they would have realized those probably don't work as well. But that sort of became okay, as long as you were wearing a mask. Actually, the other day, I saw one of the more disturbing advertisements I've ever seen, which was this thing called the unmask, which is basically to trick people into thinking you're wearing a mask, because it looks like a mask but has no effect whatsoever.
KATJA: I think an interesting thing here, though, is that it's also embarrassing to not wear a mask when other people are wearing a mask. I guess you might have this unmask so that you can follow the rules and be allowed into places or something. But also, people I talk to say that it's embarrassing for them to be outside without a mask if other people around them are wearing them, even though they think it's fine because it's outside or something. And so that's a case where you're not being over-cautious, you're being under-cautious. But you still somehow have the same kind of shame about not following what everyone else is doing, which makes me think that maybe it's not just about looking too afraid. It's a broader thing.
SPENCER: Oh, yeah, absolutely. I totally agree. I mean, I saw cases of people getting yelled at for not wearing masks on the street. But as far as I can tell, at this point the evidence suggests that if you're more than six feet away from people outdoors, you're probably very safe. And so yeah, I think it really doesn't have much at all to do with actual safety — it has to do with what people around you are doing, and what's considered normal and acceptable. And there was a really startling tweet sent out early during the pandemic. I might get the exact details wrong, but basically, an epidemiologist went to an epidemiology conference just as people were beginning to worry about COVID. And they posted online saying, you know, for all of those who are saying we should wear masks, I'm at a conference with 100 epidemiologists, and there's not a single mask in sight — we are all using lots of hand sanitizer, though. And the point of this was to show everyone that they don't need to be scared. It has just aged so badly. And it does make me wonder whether some of those epidemiologists were thinking, "Oh, I wish I could put a mask on, but nobody else is wearing one." So weird, you know? But to generalize this point a bit, because I think this is a really important topic: how would you define groupstruck in the more general case?
KATJA: I think I actually don't have a great definition. I have more like a long list of situations where it feels to me like a similar thing is going on for some reason. And I'm sort of still at the point of wondering how exactly to define such a thing. But broadly, it's cases where people are acting apparently not even in their own interests, and it seems like some sort of social pressure is going on. The especially interesting ones are the cases where the reason for the social pressure is kind of unclear. There are cases where people are constrained by people watching them to not look silly in some way, where it would obviously look silly to do the thing they would want to do, and that seems sort of less interesting.
SPENCER: Yeah, it becomes, I think, especially fascinating when someone literally is putting their life at risk in order to do the thing that's normal.
KATJA: I've seen cases of this where someone goes to the doctor for something really serious, and the doctor tells them something that clearly makes no sense.
KATJA: You know, I mean, there are a lot of good doctors, but there are also some bad doctors, or the doctor misunderstands something or whatever. And the person doesn't do the very basic pushing back, like, "No, I think you misunderstood me," or "No, that doesn't really make sense, and here's why." And then they just come away like, "Oh, the doctor told me to do this," and you're like, "Wait, we're talking about your health here. Really, you're not going to see some new doctor?" Yeah, these situations look a bit like the kind of bystander effect situations that are especially interesting, I think, when it's your own interests that are at stake. Maybe it's not that surprising if 100 people are watching an emergency for someone else, and one of them doesn't act to intervene because they're hoping someone else will do it. But if the emergency is their own — maybe being caught in a fire — you might think that they wouldn't just stand by and watch.
SPENCER: It seems, just in general, that we treat social rejection as way more important than it is by any reasonable, quote, objective standard. In other words, people will be terrified of giving speeches in front of strangers that they never expect to see again, and you're like, well, what's really going on here? My suspicion is that there's something about human nature where those kinds of scenarios — where you'd face social rejection in the ancestral environment, you know, 50,000 years ago — might have just been much more dangerous than they are now. Whereas today, there are so many people and it really doesn't matter, even if a bunch of people think you look like a fool, because you're not going to be thrown out of the tribe and die alone in the woods or something.
KATJA: Yeah, that seems right. But I think in some of these cases, even if we accept that humans really don't like social rejection, it's sort of hard to understand why they're expecting so much social rejection anyway. Like in the experiment where people are asked which lines look similar in length to them, and they go with the answer that other people give, even though the other people are actors who are saying the wrong thing to get them to conform.
SPENCER: Like the Asch Conformity Experiment.
KATJA: Yeah. If they say a different answer to everyone else, how much rejection are they expecting? In what scenario would that cause you to be socially rejected, in any logical sense?
SPENCER: It's almost like we're paranoid about social rejection. It's not like we're just worried about the chance that people tease us a little bit. It's like we're worried that we're literally gonna die or something. And so we act in a way that just seems totally crazy sometimes to avoid even a small probability of extreme social rejection.
KATJA: Yeah, it doesn't feel internally like I'm expecting to be kicked out of all of my social connections or something if I'm seen out in public wearing a giant mask. And so I think that part of this explanation seems confusing to me.
SPENCER: What is your internal experience if you're about to do something where you feel like others will be like, "Why is she doing that?"
KATJA: I think it involves a lot of trying to justify myself. Somehow my brain is in some kind of loop where it's just coming up with things to say if someone were to ask me — so maybe it just doesn't get as far. But what if I just didn't have anything to say and they asked me? When I imagine that, I don't especially imagine anything that bad happening. So it's sort of confusing.
SPENCER: Yeah, I know what you mean. Sometimes if I have a worry that people are going to think I'm weird for doing something, it's just, in advance, coming up with, "Well, if they say this, then I can say this," or whatever, to make it seem okay.
KATJA: Yeah.
SPENCER: Yeah, I don't quite know what to make of it. I mean, I think my most extreme example of this is that sometimes I'll go do public speaking and I'll feel totally fine and calm, and other times I'm just really nervous. And the best thing I can say about it is, it's like some part of my brain thinks I'm going into battle or something. There's some little part of me that's freaking out, even though my System 2 is like, come on, it's not a big deal. You've given 100 talks. Why does this matter? And I find that it's especially likely to occur if it feels sort of out of my comfort zone — like there's something different about it from other settings I've been in, or different audiences, or this kind of thing. So what are some other examples of being groupstruck?
KATJA: One that happened to me once, that I vividly remember because it was so shocking to me at the time: I was in high school, and I was watching, across the schoolyard, some other students putting pins in people's legs, or needles or something — coming up behind them, putting the needle into their legs, and then running away. And I guess at the time I was pretty ignorant about the world, and I thought that this meant there was a high likelihood they were spreading AIDS between the people or something. So it's like, oh no, these people are going to die or something. And so I was very concerned about this, and I was like, I should tell a teacher, and somehow I just couldn't. I was just paralyzed. I could see there was a teacher there, and I feel like it took me at least 10 minutes to get up and tell a teacher about it. And I think it was somehow that there were all those other people there, and they would see me going to do it. And I couldn't understand why I couldn't do it.
SPENCER: Hmm. So that's a really interesting example. It brings to mind a few possibilities. One is what we talked about — maybe we have this deep-seated fear of being ostracized from the tribe, right? It got baked in in the evolutionary environment: if you're not in the tribe, you're basically dead. Another possibility is that we fear mob mentality. Do we fear that a mob comes down and, you know, kills us — that people will just suddenly turn on you? And obviously, by saying fear, I don't mean that you're consciously reasoning it out. I mean that, on some very low-level programming, this is the kind of behavior that helped our ancestors survive. But then a third possibility that comes to mind is that we're used to the tribe thinking for us, and something is going on where it's like, "Wait, but the tribe seems to think this is okay," and the tribe's thinking is part of what I should do — some kind of very strongly rooted mimicry.
KATJA: Yeah, I think that makes sense to me. And I think my default expectation would be that individuals would pay attention to what the rest of the group is thinking and do a back-and-forth updating based on how one another is responding. But then I guess it's kind of strange if the group is just always slower at, say, responding to signs of a fire, because you might think that's the wrong response. Though I guess in all these cases it is an experiment, so not responding is, in fact, the right response. It seems possible that the group is right here.
SPENCER: Hmm. But as you point out, there are cases where the group is wrong, so it seems, at the very least, if it's a mimicry response, it's kind of misfiring. Jonathan Haidt has a really nice quote, which is that he thinks of humans as being 90% chimp and 10% bee. And he's referring there to the way that bees almost act as a single unit. Sure, you can talk about a single bee as having agency, but the right level of analysis for a bee is probably closer to the whole hive having agency. And if we think of humans as being a little bee-like, maybe there's something where, in a certain sense, we're always working on behalf of the tribe and not just on ourselves, right — or at least many or most of us are — and maybe part of what's happening is that we're outsourcing our thinking to the tribe, at least in part. And so for decisions like what should I eat for lunch, that's the chimp part; but then for certain other things, maybe we just automatically start outsourcing the thinking.
KATJA: Yeah. I guess I'm thinking of the video I saw where the fire alarm went off, and the whole group of people heard it and did not get up. And then they interviewed them afterward, and it seems many of them were hoping that someone would lead them out or save them. And it seems like, if everyone behaves in this way — if the people are identical and they all kind of think, "This seems bad to me, but I don't want to stand up and lead the group outside" — you might expect the group just always stays there. And so I wonder if it's partly to do with different people behaving differently, and if you had someone who was more leaderly, that's what's needed.
SPENCER: And that's interesting — if you bring random people together and put them in the room, and none of them are the sort of take-charge, leader kind of people, maybe nobody says anything. But if you brought in an alpha leader, they would just immediately be like, "Hey, there's smoke. Let's get out of here," and everyone's like, "Yeah, great."
KATJA: Yeah, though you might think that by chance you would get some of the alpha leader people. But it could be that people respond differently to different contexts, such that if you brought everyone in randomly, none of them feels they should behave like an alpha leader because they don't know the other people and so on. Whereas if it was, I don't know, a group of friends, then it would be clear who was the one to say, "Hey guys, let's pay attention to this."
SPENCER: Yeah, it really does seem like there are individual differences on this, right? People talk about the Milgram experiment, where they got people to administer electric shocks to someone they believed was really receiving them. This study has been questioned in various ways, but it seems at least in some of the cases, people really did think they were electrically shocking someone they didn't know. Yet some people who believed it was real just refused to do it, right? What made it shocking was how many people didn't refuse and how many went through with it. But then you look at those who did refuse — what's the deal with those people? That's pretty interesting, right? Maybe it comes down to individual differences in personality and things like that. And so maybe there are important personality factors here.
KATJA: Yeah, that seems right. I try to be better at doing embarrassing things, which is also a good all-purpose excuse if someone asks you why you're doing an embarrassing thing.
SPENCER: There are some embarrassing things that sort of create a negative externality, right? Because they kind of embarrass everyone around you or make them uncomfortable. And there are other ones that really are totally harmless — if anything, they're kind of amusing to other people. And CFAR, the Center for Applied Rationality, has this exercise they call CoZE, which stands for comfort zone expansion, where they have you go out and pick something that makes you uncomfortable but won't make other people uncomfortable, and then go try to do it. And there are some pretty funny ones that people came up with, like asking people, "Hey, do you mind if I have your shoe?" and things like that. And one person actually persuaded somebody to give him one. But yeah, there really does seem to be something to developing the skill of being able to push through this discomfort, when you know that the discomfort is just your own thing — it's not actually harming anyone else.
KATJA: It's also interesting to think about how to make this kind of thing easier for people in general, at a societal or group level. The thing we were just talking about is training yourself to overcome this kind of encumbrance, but you can also think about how to make a situation better for people, so they're able to just get up and leave if they heard a fire alarm or something.
SPENCER: Right. Imagine if everyone had to secretly record how they felt about the fact that smoke was coming through the door, right? And nobody else would see it, nobody else would know that it was them. Maybe everyone is like, "I'd rather leave," you know. I mean, the classic explanation for the smoke room experiment is that everyone's kind of looking around and thinking, "Nobody else is freaking out, so I guess it's fine," right? So everyone was taking cues from everyone else, but because everyone else is also taking cues from everyone else, and nobody has acted yet, everyone assumes it's okay.
KATJA: Yeah, you might wonder why you would have that behavior in general. It seems very close to a behavior that would actually work, where you take cues from one another and actually escalate — you see that everyone else is a little bit concerned, and then you look into it a little bit more, and so on — which I think does happen often.
SPENCER: Right. But if everyone's trying to control their response, you can't read any concern, and it can go the other way. Do you have any other examples of being groupstruck?
KATJA: Sure — just wearing a weird outfit in public is another easy place to do something like this. I think also, sometimes, parties tend to form giant groups of people, and I like having smaller group discussions. And my impression is other people often like small group discussions too, but it's very hard, I think, to just get up from a large group discussion and go and stand by yourself until someone else comes to talk to you, without looking at your phone or pretending that you're busy with something else.
SPENCER: That's funny. The other day I was at a party — an outdoor, kind of rooftop thing — and my friend afterward comes up to me and he's like, "I'm really impressed with how well you exit conversations." There really is an art to saying, "Okay, this is nice, but I'm ready to move on," and making it not awkward, and being able to actually follow your preferences.
KATJA: Yeah, something like the groupstruck thing does happen there, especially if there are several people and you'd have to say something to leave — you can't just walk away without comment. That can be surprisingly hard.
SPENCER: Everyone can be stuck in a conversation they're not [unclear]. It's awful. When I've thrown a Gather.town event — which is this online service where you have a little avatar and you can move around the space, and when you get near people, they can hear you and you can talk to them via video chat — I actually tell people at the beginning, let's use a social norm where, if there's more than one other person in the conversation, you can just leave without saying anything. Because if you don't do that, everyone gets stuck in these large clusters, which is really not great when you're on video chat especially. And so it's much better if, when there are two or more other people there, you can just leave and go form your new group.
KATJA: And does that work? Do people behave differently?
SPENCER: Yeah, it definitely helps. And I think part of the reason it helps is because nobody knows what social norms are supposed to exist in Gather.town, right? In real life, we have a sense of what's normal, and so if you just said that to people, they wouldn't necessarily buy that it's okay. But in Gather.town, they're like, "Okay, I guess this is how this works," right?
KATJA: I think I just want to try that at a real party now.
SPENCER: Well, I guess if you get everyone to opt in, in the beginning, I think it would work.
KATJA: Telling them might work as well. But yeah, my friend had a party once where, at the start, she gave people stickers that said whether they were meant to be in large or small group conversations. And if they were in small group conversations, they just weren't allowed to have too many people in a conversation — if someone else joined, someone had to leave.
SPENCER: Yeah, I'm actually really fascinated with the way you can set a social norm. So I run this group called Ergo, where we run social experiments, and we do a lot of playing with this. Basically, when we send out an invite to an event, we explain what the rules will be at the event. And so people opt in: if you're going to come, here are the rules. And that's a really nice way to filter for the people who are interested in trying those rules. And we've experimented with all different sets of rules for different events, and they really can drastically change how the event goes.
KATJA: Oh, that's interesting. Do you write up things about it? Can I read about this?
SPENCER: Occasionally. We have a website — it's ergoevents.org, that's E-R-G-O E-V-E-N-T-S dot org. So I know that you have thought about how this idea of being groupstruck applies to topics like AI safety. What are your thoughts on that?
KATJA: I was originally thinking about this because Eliezer Yudkowsky wrote this post saying that there's no fire alarm for artificial general intelligence, which is to say that people are kind of acting as they do in the case of smoke coming into the room: they're seeing some evidence, but not wanting to look too scared, so they just keep sitting there not doing anything. But if there was a fire alarm, maybe it would give them all common knowledge that it was now not embarrassing to leave the room — which, I guess, actual fire alarms don't necessarily do; it seems they don't actually cause people to leave the room happily. But you know, maybe you could imagine a better fire alarm that did that.
SPENCER: So the idea is that if one day we build very dangerous AI technology, a lot of people will be kind of waiting around thinking, "I don't yet have permission to freak out about this." And then by the time that they do sort of freak out, it'll be too late. Is that the concept?
KATJA: Yeah, well, I think Eliezer would probably think they should already be freaking out, but don't have permission. But also, as things look more and more dire, they will just continue to sit there and not do anything, because the situation will sort of be the same, just with more smoke or something.
SPENCER: And so do you have any thoughts about how we break people out of being groupstruck?
KATJA: My guess is that actually, as people get more evidence, that helps somewhat — they aren't just sitting there forever watching more and more smoke coming into the room, or more and more impressive AI things appear, without acting. So I think that sort of thing helps. My guess is that evidence of a problem that is more objective, that they can point to to justify their concern, is more helpful; whereas if it's relying more on their own personal judgment of the evidence, then there's more scope for other people thinking that their judgment is bad or something.
SPENCER: Can you elaborate on that?
KATJA: Yeah. Like, if I think I'm having a heart attack because I feel weird, then other people might think that I'm pretty anxious, and they might think that I don't have good judgment, that sort of thing. Whereas if I thought I was having a heart attack because I had some sort of device that said, in an objective way, that something was going wrong with my pulse or something, then it's easier for me to openly panic about whether I'm having a heart attack in front of other people, because I can point to the device, and they might be like, "All right, well, that's objective evidence."
SPENCER: Right. It's basically a defense against being thought of as a weirdo or, you know, odd or whatever, if you can point to something that is sort of outside of yourself, or more objective and respectable, as the reason you're thinking this way.
KATJA: Right. I think another interesting class of ways to help people not be groupstruck is just providing other incentives for doing the thing. So at a party, providing different places that people can go and stand for other reasons — to get a drink or to look at something — gives them a reason to leave a conversation. Or providing events that are fun to go to, that are also about concern about some risk, allows people to be ambiguous about how concerned they are.
SPENCER: Right, sort of like giving plausible deniability. Sometimes when someone wants to end a conversation, it's like, "Oh, do you know where the restroom is?" or something like that, right? It's like, oh, well, this person may not be leaving because they don't like me — maybe they just have to go to the bathroom. And if you have that kind of cover, then you can do weird things or break social norms without being punished for it, or at least without perceiving yourself to be punished. I actually have a funny trick I use in conversations when I'm ready to talk to someone new. I'll just say, "Hey, you know, it's really nice to meet you," in a sort of we're-finishing-the-conversation kind of tone. And then they're like, "Oh yeah, it was great to meet you too," and, you know, we shake hands, and then I walk away. It's kind of funny how you think you need to give an excuse, but you don't necessarily, actually. Although now that I've said this on the podcast — it doesn't mean I didn't enjoy talking to the person; maybe I'm just ready to go to the bathroom or talk to someone new.
KATJA: Yeah, I wouldn't even call it a trick. It's just doing the straightforward thing.
SPENCER: Don't overthink it.
KATJA: Somehow, it's very hard to do.
SPENCER: Or maybe it's like, the thing that you're actually worried about on some level is probably this person feeling bad or thinking badly of you. And then you can just cut through that by making clear that you're glad you met them — assuming that's true — and being warm, so they can feel good and they don't feel bad. So you kind of solve the problem that way. But I wonder if there's a version of that for existential risk. What is the correct way to make it so people are okay with saying, "I think we should be freaking out a little bit," or at least, "Here's a sign such that if this occurs, we should freak out"?
[promo]
SPENCER: So changing topics, another interesting concept that you brought up with me was how to be a good bounded person. So can you tell us what does it mean to be a bounded person? And then let's talk about how to do that well.
KATJA: Yeah, I think a basic theory of how to act in the world is to maximize your expected utility, which is to look at each of your options, then understand the consequences, and then rate those consequences according to your preferences. But this is basically impossible — or, I mean, it's going to take arbitrary amounts of effort. And so by a bounded person, I mean one that's not capable of infinite amounts of thinking for every choice of action.
SPENCER: So then what does it mean to be a bounded person?
KATJA: So I guess I'm saying all people are bounded. Abstract people, sort of in theory, are perhaps capable of considering all of their options and doing this kind of expected utility maximization. Whereas for a real person — if you wake up on Saturday and you have nothing planned and you think about what to do, you can't just consider every combination of different muscle movements and then, for each one, predict the entire future of the universe, or a distribution over different futures of the universe, and then apply your values to all of those to decide what to do. So I'm interested in how we might think about how we behave instead of that.
SPENCER: It's funny how economics, at least classically, assumes that people are unbounded, right? It assumes that somehow people are considering every possible muscle movement they could do at every single moment. I mean, implicitly — it's not, you know, talked about too much, but that is seemingly the assumption. As someone once said to me — and I thought it was kind of funny — even one single being that was what economists describe humans as would probably just take over the world. If there was just one of those people, right? It would be effectively infinitely intelligent. We're bounded in the amount of computation we can do. We're bounded in the number of different options we can consider — we have a creativity bound on what options we can even think of. We're bounded in working memory, how many things we can keep in our mind. So we have all these different bounds on us as beings, right? And so then, how do you think about the optimal thing versus the thing that a bounded person would do?
KATJA: I'm not sure what the best way to think about it is. My impression is that people often think of what humans should do as kind of the unbounded thing, or an approximation of that. But given that's so far from what we can possibly do, it seems to me it might be better to have a clearer picture of what the approximation looks like.
SPENCER: It makes me think of someone describing how you should hit a ball in baseball. In theory, you would understand the physics of how things move and solve some differential equations involving the initial velocity of the ball and air viscosity and all this stuff. But obviously, we're not about to do that. And it's not that you should just approximate the differential equation and solve it approximately — that's not how you go from "here's how to do it perfectly" to "here's how to do it as a bounded person." You do it with a completely different method, which is just practicing hitting baseballs.
KATJA: Practice living life. It still seems nice to have an abstract idea of what the thing to do as a bounded creature is, so that you can reason about it.
SPENCER: Yeah, absolutely. I mean, I think whenever we're trying to do something, it helps to have both an explicit understanding of it and an implicit understanding where we can just do it automatically. And then by having the explicit understanding, we can kind of guide and train our implicit understanding. A classic example would be martial arts, where you can learn to do martial arts just by copying someone and repeating a lot. But if you also have an explicit understanding, then you can critique what you're doing and be like, "Oh, that wasn't quite right, for these reasons." And then you can make your own adjustments, which can then help you train yourself and get better.
KATJA: Yeah, right. A proposed abstract description of part of what we do, that I sometimes think about, is this: instead of seeing us as considering all the options, suppose that at each point — like in any small block of time — we can basically consider two options, and the two options are determined by salience somehow. Maybe when I wake up, I have the option to keep my eyes closed or to open them. And then after that, supposing I decided to open them, I have the option to look at my clock or not, or something. So it's sort of a choose-your-own-adventure, with paths splitting in two the whole way.
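[A minimal illustrative sketch, not from the episode: the Python below contrasts an "unbounded" agent that enumerates every combination of elementary actions with the two-salient-options model Katja describes. The option names, utilities, and the salience function are all invented assumptions.]

import itertools

def unbounded_choice(n_actions, utility):
    # Enumerate every combination of n binary "muscle movements" and keep the best.
    # Even for n = 20 this is already 2**20 plans, which is why real agents can't do it.
    best_plan, best_u = None, float("-inf")
    for plan in itertools.product([0, 1], repeat=n_actions):
        u = utility(plan)
        if u > best_u:
            best_plan, best_u = plan, u
    return best_plan

def bounded_choice(two_salient_options, evaluate, steps):
    # At each step only two salient options are surfaced (e.g. "open eyes" vs.
    # "keep them closed"), and the agent picks whichever looks better right now.
    history = []
    for _ in range(steps):
        a, b = two_salient_options(history)
        history.append(a if evaluate(a, history) >= evaluate(b, history) else b)
    return history

# Toy usage: the bounded agent walks a short choose-your-own-adventure.
path = bounded_choice(
    two_salient_options=lambda h: ("open eyes", "keep eyes closed") if not h else ("look at clock", "don't look"),
    evaluate=lambda option, h: len(option),   # stand-in for a real preference
    steps=2,
)
print(path)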
SPENCER: Maybe it's slightly more accurate to say that your subconscious mind is only showing your conscious mind this kind of binary decision, but your conscious mind is not choosing which options to consider most of the time. So the subconscious is kind of a black box that, in some sense, is doing a lot of processing to decide where the decision points are, right?
KATJA: Yeah, and then we might be interested in what that black box is considering, because it's not considering everything either, probably. So it's interesting to ask what it's considering, and also how it decides where to split things up.
SPENCER: Right, yeah, it's really not considering everything. And it's using some kind of algorithm itself; it's just much harder to inspect that algorithm, because we don't have conscious access to it.
KATJA: Yeah, though we might be able to notice things about it from the outside. For instance, if things are visibly salient to me, it seems more likely that I will end up considering a decision involving them. If there's Coca-Cola often in my vicinity, maybe I will more often consider the choice: should I drink Coca-Cola now or not? Whereas if I just didn't see it, maybe I wouldn't face that choice.
SPENCER: Right, right. A lot of research goes into influencing what the subconscious mind surfaces, right? So in user interface design, you learn things about how to draw attention — maybe there are 10 buttons on the page, but you can draw attention to one button so that the choice for the user becomes "Do I push that button or not?" rather than "What do I do about the nine other buttons?" Or we know things like, if there was a sudden loud noise, almost certainly you would look at that, and that would become the center of your focus, and so then your decision might be, do I investigate the loud noise? Because we know certain things will cause your subconscious to prioritize them.
KATJA: Yeah, it seems like all this kind of thinking makes more sense on this model of a bounded creature where they're only considering a small number of options. So that seems like a nice thing about that model. I guess research into advertising maybe makes less sense in the classic utility-maximizing world of unbounded creatures.
SPENCER: Right. Well, ads in that world would just provide you with information that you didn't already know — because, you know, even in that world you still have limits on what you know at any given moment. So they'd be like, "Oh, here's a fact that you didn't have," or something. But that's clearly not how most advertising works. Yes, some ads contain facts, but they also contain a lot of other things — someone looking really satisfied, or someone drinking the beer with very beautiful people around, or whatever.
KATJA: Or the implication that this is what everyone else is doing, so you might feel bad not going along with that.
SPENCER: Yeah, exactly. So this view of life is kind of a choose-your-own-adventure story, where all you get to choose is whether to do X or Y in this series of binary choices. How does this influence your thinking? What should we draw from this?
KATJA: I think one important way it would change how you behave is that it then becomes very important which options come up — what the salient choices are that get presented to you. And so you probably shouldn't just leave that to advertising agencies or other people who are trying to make you think about certain things. If you can see ways to change your own future options, that's going to be very powerful.
SPENCER: Yeah, that's super interesting. One way to think about it is: advertising companies are trying to put onto your limited set of choices for the day the choice of whether you drink Coca-Cola, right? And social media sites are trying to put onto your limited list of choices for the day, "Do I check Facebook?" — and maybe now they've replaced 50 of your choices throughout the day with "Do I check Facebook?" because that's salient in your mind and you have this addictive impulse to check it, that kind of thing.
KATJA: Right. Whereas you might think that if you put other things somewhere salient, where you can see them — if you have a book that you want to read sometime, and you put it in a place where you will see it — perhaps that choice will sometimes arise. Which might not be enough to get you to actually do it, but it makes it much more likely than if you just never thought of it.
SPENCER: It seems to fall under a class of behavior change strategies that I use a lot myself, where I basically model my future self as a different person than me. I actually feel very identified with my future self and my past self, but when I'm thinking about behavior change, I'll temporarily adopt a stance of trying to model someone just like myself and what they would do, and trying to set things up now so that this person, just like me, will behave the way that I think is best. And there are a lot of tricks I do around that. Sometimes they're like, oh, leave a note for myself, or set an alarm to go off at just the right moment, so that I'll redirect my attention to a certain choice at the right time. Or sometimes it's building a habit, so that when this happens, I automatically think of the next thing without having to remember.
KATJA: Yeah, I think those seem to fit into this framework. And I guess the thing this replaces — the thing that you would do if you were thinking of your future self as the same as you, in the usual way — would be changing your future behavior by just intending for it to be different, or committing to being different. And maybe that would be more effective if you were a less bounded creature and could just remember, at all times, all the previous things that you intended or thought about.
SPENCER: If you were unbounded, I'm not sure you'd ever need to intend anything — you'd just calculate all possibilities anyway, right? It's easy to question why intending to do something ever changes anything, but it does seem like it can. It doesn't always. But it seems like if you don't intend to do something, you probably won't do it, and if you intend to do it, you might do it. It's like a first step towards actually doing something.
KATJA: Yeah. But I guess in these cases where you, for instance, leave a note to yourself, you do intend to do the thing, but you also set up the physical environment to remind you of your intention again, instead of just intending.
SPENCER: Right, and also trying to form a model of why I am not doing the thing, right? Because if I thought the reason I wouldn't do the thing is a lack of motivation, then giving myself a note is probably not going to help, unless that note contains something that motivates me — which maybe I could do. But if I think, oh, I'm probably just going to not think of it at that moment, and if I had thought of it, I would have done it, then all I need is a reminder — so maybe a note or an alarm on my phone. So once you have a theory of why you're not going to do a thing, then you can try to construct an intervention to make that choice salient in the way it needs to be, so you actually do it.
KATJA: Yeah. I guess a reason I often don't like having alarms and notifications and things is that they feel overwhelming, or I start to ignore them. And I think maybe that also makes some sense on this model, where it's like, well, you have a certain number of choice points or something, and if you try to cause your future behavior by slotting it into a lot of your choice points, then you might feel you don't have enough spare slots for whatever else is going on. And so it would be nice if you could somehow cause it to happen more automatically, so that it wasn't always a choice.
SPENCER: Right. Like, if you made it into a habit, maybe it would actually take less willpower or less mental friction to do it. This is actually why one of my favorite types of interventions is: okay, so you want yourself to do something — is there a version of it that's fun? Because if there is, just do that. People will be like, "Oh man, I hate running on the treadmill, and I know I should do it," and I'm like, "Okay, well, do you like soccer? Do you like martial arts? Because if there's any other exercise you like, just do that."
KATJA: Yeah.
SPENCER: I think the bounded person idea is really interesting because it shows why heuristics can be so powerful. Because we're not perfectly rational agents, we have to operate on heuristics — we have no choice. And then that suggests, well, maybe we could learn useful heuristics that, even though they're far from optimal, given our limited willpower, bandwidth, working memory, and so on, are actually pretty good to stick by. And I also think about principles, which are something we've been delving into lately because we're building a Clearer Thinking module on principles. We define principles as sort of pre-decisions: you've decided in advance what you're going to do before a situation has arisen. So for example, you might have a principle of always telling the truth to the people you care about, right? And then when you're in a scenario later, and you're like, "Do I tell the truth?", you just have to ask, "Well, is this a person I care about?" and you already know the answer. So you're kind of pre-deciding. And then you could say, well, why bother pre-deciding anything? Why not, in every single situation, just decide in the moment? Well, maybe in the moment you're going to be tempted to do the wrong thing, or you're going to be exhausted, or it's too cognitively demanding. And so by pre-deciding based on these principles that most of the time lead to good outcomes, you can maybe get better outcomes overall. Also, maybe it feels less cognitively demanding and opens up more choice points for other decisions.
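[A toy sketch, not from the episode: one way to picture a "principle" is as a cached pre-decision, so the expensive deliberation happens once and later situations just look up the rule instead of re-deliberating under pressure. The situation labels and rules below are invented for illustration.]

PRINCIPLES = {
    # situation label -> pre-decided action, worked out once, in a calm moment
    "asked something by someone I care about": "tell the truth",
    "offered a commitment I can't start this week": "decline politely",
}

def decide(situation, deliberate):
    # Cheap lookup first; it works even when you're tired or tempted.
    if situation in PRINCIPLES:
        return PRINCIPLES[situation]
    # Otherwise fall back to slow, case-by-case reasoning.
    return deliberate(situation)

print(decide("asked something by someone I care about", lambda s: "think it through..."))
print(decide("choosing what to eat for lunch", lambda s: "think it through..."))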
KATJA: Yeah, it seems like if the considerations are similar in each of the cases, you save a lot of effort in the long run by just doing the calculation once or something.
SPENCER: Right. And I think there are some other potential benefits of having principles. If you have a principle and stick to it, you build an identity around it — so as long as it's a good principle, that's helpful — and then it becomes a habit, and maybe you become more the sort of person that just always behaves that way, right? There's something really powerful about not just being a person that intends to always be honest, but being a person that is honest by default, right? And then other people are like, "Wow, this person is really honest; I can trust them," right?
KATJA: It seems like being transparent to other people is a sort of general upside of having principles.
SPENCER: Right. And if you make them explicit, you can even talk about them. So you can signal both with your behaviors and with your words at the same time, and really show people what sort of person you are.
KATJA: Yeah. If I think of myself as a bounded person, a different way I think about my decision-making is this: I think people often take for granted that you should have plans for the future — like, what are you doing in the next five years, what will you be doing in five years? And that seems like the kind of thing that makes more sense if you're unbounded. If you're playing chess and you could just imagine the entire game tree ahead of you, then it would make sense to say where you're hoping to have the board set up in 50 moves.
SPENCER: Well, it assumes both unboundedness and a sort of determinism, right? Because imagine you're playing poker and you're a perfectly rational agent — you're still like, "Well, it depends on what my opponent does, right? I can't tell you what I'm going to do."
KATJA: Yeah, which also comes up in real life. It seems like in chess, the thing you do then — well, I'm not a good chess player at all, but my understanding is that what they do is have an idea of which board positions are good. And so you don't think that many turns ahead, but you have a good sense that if you did these next few moves, you would get to a heuristically better position. And so I sort of wonder whether you should do that in life much more than people often seem to think. Instead of saying, "I hope to be in this particular job or role, and be married or something, in five years," you would say, "I have no idea what I'll be doing in five years, but I generally have a sense of which kinds of situations are better. If you gave me a couple of different projects, I could tell you which one I liked more," and you sort of move in the direction of things that seem good. I guess if you went extremely in this direction, you might just, every day, do what seems good without a particular goal for where that will lead in any period of time.
SPENCER: It seems to me that both extremes are not ideal. Choosing a long-term, very specific goal — like, "No, I'm going to get exactly this position" — can be really a problem, because maybe life circumstances make that impossible, or maybe you're going to over-focus on getting that exact thing and lose sight of the reasons you want the thing, because there might be other opportunities to get the value that you're seeking that don't need that particular goal. And so you're going to miss these other, maybe better ways to get that value, right? You're over-focused. On the other hand, if every day you're just saying, "Do what seems good right now," it feels like it's very hard to get to very long-term, ambitious achievements, because they often require pushing really hard in one direction and overcoming obstacle after obstacle for a long time.
KATJA: Yeah, I think that seems right. So my guess is that you do something in between, as I think is also true for many kinds of activities like chess — you do still think, "If I make this move, they'll make that move, and then I'll make this move," somewhat.
SPENCER: But I think you've made a really interesting point: the same way that a chess player doesn't evaluate every single move into the future, but evaluates maybe a few moves into the future and is able, for any board position, real or imagined, to immediately say roughly how good that board position is — if we build up better heuristics for roughly how good a situation is, that seems really powerful for making decisions. Because we can say, "Ah, well, if I were to do this, that would put me in this situation, and I can tell roughly how good that is; and if I did this other thing, that would put me in this other situation." It's a very powerful hack for not having to iterate through all the future branches of a decision tree, which is impossible.
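[A minimal illustrative sketch, not from the episode: choosing among a few candidate next steps with a rough heuristic value function, instead of searching a whole decision tree. The feature names and weights are invented, loosely echoing heuristics mentioned later in the conversation (freedom, resources, happiness, knowledge).]

def heuristic_value(state):
    # state: rough self-assessed scores in [0, 10] for each feature
    weights = {"freedom": 1.0, "resources": 0.8, "happiness": 1.2, "knowledge": 0.6}
    return sum(weights[k] * state.get(k, 0) for k in weights)

def pick_next_step(candidate_steps):
    # candidate_steps maps a step name to the state you'd roughly expect afterwards;
    # like a chess player, we only score the handful of positions we can actually imagine.
    return max(candidate_steps, key=lambda step: heuristic_value(candidate_steps[step]))

options = {
    "take the new project": {"freedom": 4, "resources": 7, "happiness": 6, "knowledge": 8},
    "keep current routine": {"freedom": 6, "resources": 6, "happiness": 5, "knowledge": 4},
}
print(pick_next_step(options))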
KATJA: Yeah. I sometimes think about what it would be like to just have a much better sense of how valuable different things are. You could just practice a lot — say, you make an automated "would you rather" game that just keeps offering you options. And then it could maybe tell you where you were inconsistent, and you could practice and become good at evaluating things consistently, in a way that you approved of.
SPENCER: Right. I guess the challenge is, you don't know how good they actually turn out to be, right? Maybe you can detect inconsistencies, but not whether they're good choices.
KATJA: I feel like I would still do better than I probably do just making choices in general, because my untrained judgments are probably at least that bad at picking the thing that's actually good in the long term — but they're also full of inconsistencies. If I made a bunch of similar choices, I would realize at some point that I didn't really like this, in a way that was foreseeable.
SPENCER: Maybe the way that a chess master might use certain signs of progress in the game, we can use certain signs that we're getting to better and better places. For example — I'm not a chess player — but they might have a notion of how much control of the board they have, or obviously how many pieces they have versus the other player, or how locked up the other player's pieces are, that sort of thing. And we might be able to use heuristics like: how much freedom do I have in my life? How many resources do I have? How happy am I? How much do I know about the things I want to know about? These different sorts of heuristics for whether we're moving in the right direction, even if we don't know where we're trying to get to.
KATJA: Yeah, it seems like there are so many different ones that it's harder to have a high-level picture than in chess.
SPENCER: Right. It's interesting to think about every month doing a rating on different dimensions of how your life is going, and whether you're actually making progress or not. Maybe there's something to that.
[promo]
SPENCER: So one issue that this kind of thing brings up is the distinction between quantitative methods — ways of assigning numerical values to things, or thinking in terms of efficiency, where you can take some number and optimize it — versus more qualitative ways of evaluating things and saying how good they are. I know you have some thoughts about this one.
KATJA: Yeah. One thing that I've thought about is that it seems people often don't like the quantitative ones as much, or feel they're kind of cold or likely to be bad somehow. I think that's interesting — especially the word efficiency, which I've thought about. It seems like people kind of associate it with coldness, and maybe with missing out on things that you care about, which is interesting because it really just means getting the thing you're trying to get as well as possible, which seems like it should be good.
SPENCER: Right. It reminds me of this quote that's attributed to Einstein — I don't know if Einstein ever said it — which is that not everything that can be counted counts, and not everything that counts can be counted. So I wonder if that's part of where people are coming from on this: they think when you're talking about efficiency, you're talking about the things that you can easily measure, and the things you can easily measure are probably not the things that really matter.
KATJA: Yeah, I think that's my best guess about what is going on — that whenever, in practice, people are trying to do well on some sort of metric, they're using a metric rather than just a vague sense of what is good, and then the metric is missing things out, and then people are unhappy about that. You might think that if you try to do well on a metric, you would at least get more of the things you did include in the metric, maybe at the expense of the other things, and you might hope that that was overall good — or we wouldn't keep doing this so much. But the fact that people seem to feel unhappy about it suggests that maybe it is that bad.
SPENCER: It seems like it depends on how ruthlessly you optimize for a metric. I tend to think if you pick pretty much any metric and optimize it really intensely — you're saying, "My only goal is to squeeze out more on this metric, I don't care about anything else, and I'm going to push really hard in that direction" — that tends to produce really bad things, because there's almost no metric that really captures what we care about. But if you're just saying, "Oh, nudge it a bit in that direction," like taking one gradient step, usually that's fairly safe, because you're keeping in mind that you don't actually care about it infinitely, and it's not necessarily the case that after this one gradient step, going further in that direction is still going to be your highest priority.
KATJA: Yeah, that's interesting. And then I guess the times that people say, "This thing is really efficient," are especially when they've gone really hard on the metric and you can see that the numbers are looking very good. And so those tend to be the worst circumstances.
SPENCER: Yeah, imagine a society where they're like, "Actually, the only thing we care about is GDP per capita. That's it," right? Then immediately you're like, okay, well, that involves kicking out poor people, because that will make GDP per capita go up, and that's clearly not what we want. So just trying to pick a metric that doesn't immediately lead to perverse strategies is actually surprisingly difficult.
KATJA: Yeah, I guess I'm not sure what to do about this, in that optimizing less ruthlessly seems potentially good, but it's kind of hard to specify what that is. Maybe I'm just assuming that one has to specify things and optimize them ruthlessly.
SPENCER: The approach I like to take is to have multiple metrics. This comes up with software products, where it's like, okay, what do we really care about? People sometimes talk about your one key performance indicator, but no single metric is actually what you care about. I like to think of it as having a bunch of metrics, each of which is a flawed indicator but related in some way to what you actually care about, so you're tracking a whole swarm of indicators and trying to move them in roughly the right direction. And you often alternate between them: okay, this month we're going to focus on conversion rate, and next month we're going to focus on retention rate. It's too hard to keep them all in mind every second, but you keep looking at all of them, taking turns optimizing each of them while watching how the others move. And you keep reminding yourself that none of them is actually the real thing; the real thing you care about is something else that can only be described in words, and even then it may be difficult to describe in words, but it certainly can't be described as a single number. So you try to notice if the swarm of indicators is doing well but you're somehow drifting from the actual thing you care about, in which case you maybe need to add more indicators to get closer to the thing you care about.
KATJA: Is this different from just having like a very complicated indicator?
SPENCER: Well, I think it depends on what you mean. If an indicator is a single number, it's very, very hard for a single number to encapsulate enough stuff. Say your goal with a product is to improve people's lives as much as possible, right? So one indicator might be: how many people use our product? Another indicator might be: once they start using our product, how long do they stick with it? Another indicator might be: when they do stick with our product, how much value do we think they get out of it? But that's not enough, because maybe it also has to be a sustainable business, and so you need other indicators that are about your profitability, right? And so on.
KATJA: Yeah, I guess I was imagining maybe you could just have one number: how many people use your product, multiplied by some number, plus how sustainable it is, in whatever way you measure that, times another number, et cetera.
SPENCER: I think it's very tough.
KATJA: I do imagine that going badly.
SPENCER: Yeah, I think it can be very misleading, because once you have that big number, in order to reason about it you have to pull it apart again to actually understand what's going on. And then why not just look at the whole swarm of indicators? Maybe adding the combination of them as yet another indicator is useful, right? But I would still want to look at all the different pieces of it and see what's going on. It also gets especially hard when you hit things like, oh wait, I also need my team to be happy. How do you combine team happiness into a metric that also includes profitability, user retention, and marketing? It seems incredibly hard to combine it all into one number, or at least a number that actually tracks what you value. And what you value, if you really think about it, is very, very complex, and none of the metrics really map onto it; they're just correlated with it.
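Here is a minimal sketch of the "swarm of indicators" idea Spencer describes. The metric names, values, and weights are invented purely for illustration: each indicator is inspected on its own, one gets the month's focus, and a weighted combination is tracked as just one more indicator rather than the thing being optimized.

```python
# A sketch of tracking a swarm of flawed indicators. All names, values,
# and weights here are hypothetical; none come from a real product.

metrics = {
    "conversion_rate": 0.04,   # fraction of visitors who sign up
    "retention_rate": 0.55,    # fraction still active after 30 days
    "team_happiness": 0.70,    # internal survey score, 0 to 1
    "profitability": 0.10,     # operating margin
}

focus_this_month = "retention_rate"  # rotate the focus over time

# A weighted combination can be useful, but only as one more indicator:
# collapsing everything into it hides which piece actually moved.
weights = {"conversion_rate": 0.3, "retention_rate": 0.3,
           "team_happiness": 0.2, "profitability": 0.2}
combined = sum(weights[name] * value for name, value in metrics.items())

for name, value in metrics.items():
    marker = "  <- this month's focus" if name == focus_this_month else ""
    print(f"{name}: {value:.2f}{marker}")
print(f"combined score (just another indicator): {combined:.3f}")
```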
KATJA: I guess an interesting thing to me then is: how do you manage to pursue that thing if you don't have various metrics and you're not trying to be efficient or something? Are you just pursuing that unspoken thing and doing relatively well at it? Why does that do better in some sense, or why do people like that better?
SPENCER: Well, you know, I think it's a good question. I think the general consensus in the startup world is that having metrics actually is really important. If you don't have them, you just do a bunch of stuff and you don't tend to make a lot of progress. Obviously there are exceptions; some people do really well using some kind of internal metric. But I think it's generally believed that having external metrics actually helps you do a better job, and it makes it harder to bullshit yourself, right? Because you can see whether something is changing. So I think there is a lot of value in these metrics, as imperfect as they are. But to your point, I think it's worth wondering about the distaste people have for, quote, efficiency, or the distaste people have for quantifying things. Is there some wisdom in that? Should we take it seriously that so many people seem turned off by it?
KATJA: I guess I'd like to try to take it seriously. But I'm not sure where to go with that.
SPENCER: Well, in what sort of contexts do you think people tend to bring this up the most? I sometimes see it in the context of doing good, right? Like, quantifying doing good loses something important. Or maybe government policy trying to quantify too much.
KATJA: Yeah, I think if you try to quantify things in your own life, that might also bring it up. I guess there are places where I don't see it brought up that much, but where I imagine it would get brought up if people were quantifying things, which is a different kind of data point. Like if you tried to quantify your time spent with your children or something, and tried to just hit some markers of successfully being with them, right?
SPENCER: Well, I think that if people quantify how much they sleep, that's considered only slightly weird, but if people quantify how much quality time they spend with their children, people think that's very weird. And maybe what's going on there is this idea of sacred goods versus quantification: spending time with your children is sacred in some sense, it's deeply meaningful, and the mindset of quantification is the wrong mindset for a sacred activity.
KATJA: Yeah, that seems right. That seems kind of interesting.
SPENCER: Like quantifying how much you like all your friends.
KATJA: Yeah, or quantifying how good different people are on some metric. That's a thing I've sometimes seen people do.
SPENCER: Right. So then, are there good reasons to find it distasteful? As we said, a lot of people seem to have a natural kind of negative reaction to it. But what are they pointing at? Is there value in that?
KATJA: I guess maybe it's not to do with the quantification so much as the fact that it actually contains spicy information, especially if you share the information.
SPENCER: Right. If you rank everyone... I think, if I recall correctly, Mark Zuckerberg, before starting Facebook, made a website where people would rate the attractiveness of college freshmen or something like this, which seems like a really, really bad signal. Maybe part of what's going on is that we want to be around people who aren't doing a calculation when they interact with us. So imagine a friend invites you out to get coffee, and then they offer to pay, and then you realize that they wrote down in a ledger how much they spent on you, right? And then they expect you later to pay exactly the same amount. We'd be like, oh, that's not really the kind of interaction I wanted. Because we want them to offer to pay just because they want us to be happy, and then later we'll offer to pay because we want them to be happy, not because we're keeping a ledger. It's not a tit-for-tat relationship. It's that we value each other and the relationship, if that makes sense.
KATJA: Yeah, I guess that also makes me think that, just in general, if someone is calculating things, it suggests that they're potentially being more Machiavellian and maybe treating you as a means rather than an end or something. Whereas if they're not calculating and are using their intuitions, it's sort of more likely that they're being driven by their gut feelings or something. So to the extent that you hope that they really like you, or are moved by the sacredness of a thing, and you expect that to go more through their intuitions, then calculating things is evidence that they're not doing that.
SPENCER: Right. Imagine you could spend time with two different people, and in the past they've both treated you exactly the same, but one treated you well because they like you and care about you, and the other treated you well because they were analyzing it as a tit-for-tat relationship and realized that treating you well would lead to the best outcome. It feels like we would much rather hang out with the first person, even if their behavior so far has been identical, you know what I'm saying? Maybe deep down we prefer to hang out with virtue ethicists rather than utilitarians, or something like this. What do you think about that?
KATJA: I think in that particular scenario, I do imagine feeling warier of the calculating person, which is sort of interesting in that the calculating person is more predictable, in some sense. It seems like, if I continue to cooperate with them in this way, presumably they will continue to treat me nicely, whereas the other fellow...
SPENCER: You can apply game theory to them, right?
KATJA: But yeah, exactly. That makes me feel more wary.
SPENCER: One thing I find kind of funny, and maybe I'm wrong about this, but it seems to me that a lot of effective altruists who are very steeped in utilitarian philosophy actually behave as very virtuous people, which I like. Because in my moment-to-moment interactions with someone, I don't want them doing a calculus about whether this conversation is the maximally good thing for the world. That feels very distracting to me. Honesty is a really good example: I want them to be honest with me because they're honest as a person, not because they did a calculation and decided that not lying to me in this instance was better than lying to me.
KATJA: Yeah, it seems like you can sort of separate these behaviors into calculatingness and goal-directedness, or something, where you can imagine a person being utilitarian in the sense that they're trying to do the thing that is most good in some sense, or most consequentially good, but the way they're getting the answer about what is good is by consulting their sense of what will go well, rather than calculating. And I think that makes me feel better. I wonder how you feel about that?
SPENCER: Right. Well, it's interesting. Imagine someone who really likes you and really wants to have a good friendship, and so after each interaction with you they go home and put an entry in a spreadsheet quantifying how it went, and then try to think about, okay, how can I make it go better next time? I kind of find that quirky but endearing in a way. Maybe it's a little obsessive, but they care so much, and they're just trying to bring their nerdy quantification to the process of having a better relationship. Whereas if it feels like someone has some ulterior motive, whether or not they're quantifying, like, oh, they're spending time with me in order to achieve some goal that has nothing to do with me, then I think it feels bad either way, whether quantification is involved or not.
KATJA: Yeah, right. As in, if they just think that, broadly, the future will go better if they hang out with you, and they haven't thought about why or anything, that still feels kind of more alarming.
SPENCER: Right. So maybe it's just that, in many cases, quantification is a signal for this calculatingness, right? In theory they're not the same thing, but often, when you're calculating numerically, you're being calculating, you know, strategically, if that makes sense.
KATJA: Yeah. So in the case where your friend goes home and calculates how to be friends with you, it's also strategic, just strategic in a way where they do care about you.
SPENCER: Right, I mean, strategic toward some alternative end, right.
KATJA: Yeah, what if they do care about you, and they also think that it's good to care about you for some further end?
SPENCER: I mean, I think that's okay. It sort of depends on the amount, right? If it's, like, 90 percent that they're hoping this good thing will happen to them in the future because we hang out, and 10 percent that they just like being with me, then I think that's not great. But it feels totally fine for at least a little bit of the reason you like someone to be some other thing. If it becomes too big, then that's a problem. What do you think?
KATJA: I guess I feel like there's a different thing where people think, it's somehow good for me to just deeply care about people; I think that my life will go better, and the world will go better, if there are some people I really care about. And then they go ahead and really care about some people, in a way that somehow does fine on the utilitarian calculus and also involves genuinely caring about the person. Possibly this doesn't actually make sense in the end, but I think it's an appealing way of combining the things.
SPENCER: It seems also that when we want someone to like us, we want them to like us for certain reasons; we actually care about why they like us. And if their reason is just that they wanted to care about someone and we happened to be sitting there, it feels like they don't care about us in the same way. They just care about a person who happens to be us.
KATJA: Yeah, that seems right. Though it seems like you could have a similar thing where, well, they wanted to care about someone, or preferred to care about someone, for various reasons that made the person seem good, and you were standing there with those good characteristics that they liked. Would that be objectionable?
SPENCER: Well, yeah. I mean, if you write down why someone likes us, it's probably going to seem objectionable. At the end of the day, there's always a series of reasons, probably mostly not conscious ones, probably a lot of them subconscious. You know, they like the way we talk, and they like the way we smile, and they think we say interesting things, or they like the way that we ask about them. If you actually break it down, it ends up just being a series of things. So this sort of gets to the issue of explicitness. There's something about being too explicit about it that seems to break the spell, right? If, subconsciously, someone processes all this stuff about you and they really like you, it feels better than if they say, oh, well, there are three reasons I like you, and one of them is the way that you ask me questions, and another one is that I like the way you look. Then it feels very weird and awkward, even though maybe that's what their subconscious is doing anyway.
KATJA: Yeah, maybe if they had a long enough list it would become less awkward.
SPENCER: It does feel less awkward. If it's a longer list, then you feel more like, they like me, and not just some random aspect of me.
KATJA: Right. I guess it probably feels better if it's like, they like you in spite of any of your characteristics. Though that runs into other problems, as in, if they're just like, yeah, I like you, it would be fine if you became terrible at asking me questions and were really ugly and stuff, I'd still like you.
SPENCER: Well, then that starts to border on the feeling of an attachment that has nothing to do with you. If it really becomes independent of all your characteristics, then it's like, what is it? This isn't even about you and me. That actually seems like the way parents often are with their children, though not always, obviously. They often seem to have this bond where, no matter how bad their kid is, they still have this really strong attachment to the child.
KATJA: Right. I guess it seems hard to have a good answer here for how to relate to other people. I mean, for one thing, it seems like if you were going to like people in explicit conflict with what is good for the future, like, yes, being friends with this person is going to make the world worse in the long run and destroy value, that seems bad. So is it that you're just not allowed to check whether you think it will be good or bad to be friends with someone? Or is it just bad if you're only friends with them because you think it will be good, but fine to be friends with them because you want to, except don't do it if it's not going to be good? I'm not sure if that makes sense.
SPENCER: Yeah, I think again it just depends on how much it's influencing you, right? If it's a primary consideration, that feels bad, whereas if it's just a minor consideration, that seems fine. And maybe let's take a step back here and ask what we're really talking about. I suspect what we're talking about is that humans evolved a mode for interacting with other humans, and that mode has certain rules to it, and if you go too far outside of those rules, you're no longer doing that thing that we call human bonding, or friendship, or connection. So there's this mode we can be in mentally, and then someone can deviate outside of it, and we're like, ah, I thought we were doing that friendship thing, and we're clearly not, because you're not in the right mode. We need to both be in the mode for it to work. And things like saying, well, I'm just friends with that person because I think it will cause me to have a higher impact or something, that breaks the mode, so they're not in it. And it's also a bilateral mode: if we think the other person isn't in it, that knocks us out of it too, right?
KATJA: And so being in the mode requires only having certain kinds of thoughts, or avoiding having certain kinds of thoughts.
SPENCER: I think that's right. Yeah,
KATJA: In some sense, that seems like quite a big constraint or something.
SPENCER: Right, right. Thoughts like, well, I'm just seeing this person for XYZ benefit, seem to violate it. And so if you're actually having that thought, you've kind of broken out of it, though maybe the other person can't detect it. And so that actually maybe gives you a reason to repress thoughts like that, right? And to convince yourself that that's not your motivation, even if it is.
KATJA: Or even just wondering: is it good in consequentialist terms that we're hanging out?
SPENCER: Right. Well, I think it's fine to consider that, as long as it's not the justification for hanging out, right? Yeah. All right. So before we wrap up, I want to do a rapid-fire round. I'm going to ask you a bunch of questions and just try to get your quick answers to extremely difficult questions. Are you ready?
KATJA: Yeah.
SPENCER: All right. So the Sleeping Beauty problem: what's the solution to it?
KATJA: One third.
SPENCER: One third, okay. Do you want to say a few sentences about why it's one third?
KATJA: All right. So the setup is: you have Sleeping Beauty, she's going to go to sleep, and you're going to flip a coin. If the coin comes up heads, you're going to wake her up on Monday. If the coin comes up tails, you're going to wake her up on Monday, then give her some sort of drug so that she doesn't remember it, and then wake her up on Tuesday. In either case, she's going to go back to sleep and then wake up again after the experiment, and you're going to tell her the experiment is over. So there are three different possible wakings that could happen during the experiment: it could be tails on Monday, or tails on Tuesday, or it could be heads on Monday. And when she wakes up, the question is what probability she should put on heads.
SPENCER: Right. And one side of the argument says, well, the coin has a 50/50 chance of coming up heads, so the probability has got to be 50/50; she doesn't learn any new information, so that's what it's got to be. And the other side of the argument says something like, well, there are more wakings in the tails scenario, so you should tilt towards one third. Is that accurate?
KATJA: Yeah.
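A small simulation can illustrate the "thirder" bookkeeping Spencer just summarized. The key assumption, which is exactly what halfers dispute, is that credence should be computed by counting over awakenings, so this sketch illustrates the counting rather than settling the argument.

```python
# Monte Carlo sketch of the thirder counting argument in Sleeping Beauty.
# Heads produces one awakening (Monday); tails produces two (Monday, Tuesday).
# Counting per awakening, about a third of awakenings happen in heads-worlds.
import random

def fraction_of_awakenings_with_heads(trials: int = 100_000) -> float:
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        coin = random.choice(["heads", "tails"])
        awakenings = 1 if coin == "heads" else 2
        total_awakenings += awakenings
        if coin == "heads":
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(fraction_of_awakenings_with_heads())  # roughly 0.333
```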
SPENCER: So we can put a link in the show notes to the Sleeping Beauty problem for those who want to look into it. Do you think that AI will be agentic? Maybe you could take a second to say what you think agentic means.
KATJA: Yeah. Agentic means roughly goal-directed: it cares about the world being a certain way and moves toward making it that way, or prefers some states of the world over others, versus having patterns of action or patterns of response to things that aren't really paying attention to how the world will go in the future. I do think that at least some AI is likely to be agentic, because it just seems very useful. For instance, if you can have something like another person that you give a task to, and they know what the goal is and can work on making that goal happen, that's better for you than if you still have to try to direct things toward that goal yourself using various tools, it seems like.
SPENCER: So there are a lot of different narratives for, quote, what's going on in the world. What are a couple of narratives that you think are particularly helpful right now?
KATJA: I guess a particularly salient one is that AI is progressing relatively quickly and might cause great destruction or utopia, and so how anything else affects that is sort of one of the more important things about everything else that's going on. That's a narrative that I spend a lot of time around.
SPENCER: Okay, the Great Filter argument. Do you want to say in a few sentences what that is, and whether you think anthropic reasoning implies we're going to get destroyed in the future because of it?
KATJA: Yeah. So the Great Filter argument is, roughly: it seems like there are lots of planets in the universe, yet there aren't any alien civilizations that are so advanced and successful that we can see them, or at least we haven't seen evidence of them, for instance because they've actually come here and we've met them, or because they've built anything really impressive. That suggests that somewhere on the path between being a random planet and giving rise to an incredibly successful civilization, there are some really hard steps, like super duper hard steps. I think Robin Hanson named this the Great Filter: planets are getting filtered out on their way to success. And this is kind of interesting, because if we would like to ultimately survive and go out and live among the stars and do things that would be visible, it's interesting to ask whether those really hard steps are in our past or in our future. Maybe the ones in the past look harder, like, for instance, how hard is it for life to begin? Perhaps the popular view is that it might just be very hard. But if you use this kind of anthropic reasoning, where you up-weight hypotheses on which you're more likely to exist, where you take your existence as evidence for the scenarios in which there are lots of people like you, then that's a big update in favor of thinking that the filter is in the future, because on those hypotheses it's pretty easy for there to be civilizations at our level of development. And that's pretty alarming, because it means we haven't hit the really hard step yet. There are other complications here that might make this not go through; for instance, I guess Carl Shulman has recently written about how the simulation argument might undercut this, because maybe, once you're using this kind of reasoning, you should just think you're in a simulation instead.
SPENCER: Interesting. So if I understand you properly, you're saying that if we're trying to figure out what the state of the universe is, the fact that we exist seems like it should give higher probability to states of the universe that have more things like us, but then that tilts the balance towards thinking that the Great Filter might be in front of us rather than behind us. Is that right? Yeah. But then maybe that kind of reasoning also affects the chance that we're living in a simulation, because it involves a similar kind of thinking, so it complicates matters further.
KATJA: Yeah. And I haven't thought through whether I agree with that modification.
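To see the shape of the anthropic update Katja describes, here is a toy calculation along the lines of the Self-Indication Assumption: each hypothesis is up-weighted in proportion to how many observers like us it predicts. The priors and civilization counts are made-up numbers for illustration only, and the simulation-argument complication she mentions is not modeled.

```python
# Toy anthropic (SIA-style) update for the Great Filter. All numbers are
# hypothetical. "filter_behind_us" predicts few civilizations at our stage;
# "filter_ahead_of_us" predicts many (they just haven't hit the hard step yet).

priors = {"filter_behind_us": 0.5, "filter_ahead_of_us": 0.5}
observers_like_us = {"filter_behind_us": 1, "filter_ahead_of_us": 1000}

weighted = {h: priors[h] * observers_like_us[h] for h in priors}
total = sum(weighted.values())
posteriors = {h: w / total for h, w in weighted.items()}

print(posteriors)
# filter_behind_us ~0.001, filter_ahead_of_us ~0.999:
# the update favors the alarming "filter ahead" hypothesis.
```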
SPENCER: All right, Katja. Last question for you. What's something you changed your mind about?
KATJA: I thought that philosophy was probably not very good, so I thought I should read a philosophy journal, I guess. And I think I actually came across an article about the Sleeping Beauty problem pretty quickly in that journal, and I was like, ah, obviously it's such and such... wait, actually it's the other thing. And then I became very interested in it and thought about it a lot. And then I did an honors thesis on anthropics and the Great Filter with David Chalmers as my supervisor, not officially in the philosophy department, but a really philosophy-leaning thing. And then I went to grad school in philosophy, and dropped out again.
SPENCER: So you'd say philosophy is better than you thought?
KATJA: Or at least, I ultimately considered it worthy of a bunch of my time.
SPENCER: Awesome. Katja. Thank you so much for coming on. This was really fun.
KATJA: Thank you.
[outro]