CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 032: Moral Discourse and the Value of Philosophy (with Ronny Fernandez)

March 18, 2021

What is normative hedonism? What's the difference between wanting something and wanting to want something? Should we only care about the experiences of conscious beings? What's wrong with moral discourse? Does philosophy ever actually make progress, or is it still only discussing the things that were discussed a thousand years ago? What is (or should be) the role of intuition in philosophy? Why should people study philosophy (especially as opposed to other disciplines)? What can we do to create more rationality or systematic wisdom in the world? How can we disagree better?

Ronny Fernandez is a philosophy PhD student at Rutgers University and a high school dropout. He is interested in formal epistemology, human rationality, and AI alignment. He blogs at figuringfiguring.com. You can follow him on Twitter at @TrueBrangus or send him an email at anonnerdfrenzy@gmail.com.

JOSH: Hello and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast and I'm so glad you joined us today! In this episode, Spencer speaks with Ronny Fernandez about the utility of caring for the experience of others, differentiating between beliefs and reality, approaches to interpreting moral discourse, and the value of studying philosophy.

SPENCER: Ronny, welcome! It's really great to have you here.

RONNY: Thanks! Good to be here, Spencer.

SPENCER: The first topic I want to talk to you about is the topic of normative hedonism. Can you tell us a bit about what that is and why you think people are confused about it?

RONNY: I think the basic claim is something like, "The only things that matter are experiences, stuff that's going on inside of people's minds." While I don't necessarily want to say that it's wrong (for reasons I hope that we'll talk about later), I think a better thing to say might be, "It's not necessary. It's not the only thing that you can care about." I hear a lot of arguments like, "It only makes sense to care about things that are in people's experiences." If you look at some thought experiments philosophers have been talking about for a long time, it seems pretty reasonable to say that you can care about things that are not in people's experiences.

SPENCER: I'd love to dig into that with you. I wanted to point out first, sometimes I hear people make an even stronger claim: that the only things it makes sense to care about are things in your own experience, not just in people's experiences generally. Any comments on that?

RONNY: I don't know. I guess that's an even stronger version.

SPENCER: I have this very frustrating debate that I've had quite a few times where people are like, "If someone chooses to do something, it means they think it was better for them in some sense or that they were happier doing it than not doing it." I'll bring up an example of, "Imagine someone who believes in a cause and, as a protest, they set themselves on fire. You really think that person thinks it's in their own interest to set themselves on fire? Are they happier having done that?" At the least, it's a very frustrating semantic debate about what it means for someone to do something in their own interest.

RONNY: I think what might be happening is that people are thinking that somebody's only going to take an action if they believe that it's going to move the state of reality (whatever it is) up their preference ordering. If that's what you mean by selfish action, then it does seem tautological that people are going to be... I mean, that's not even quite right because there are things like akrasia.

SPENCER: If you zoom in on what the human mind is, this becomes clearer. Because if you're viewing the human mind from a distance, you're like, "It's a thing that tries to make the world into its own preferences," or something like that, vaguely. If you zoom in, it's like, "There are all these different forces determining what we do." Habits are one of them. We're not rethinking every action before we take it. We might just have a habit. Or a startle response: you hear a loud noise, you startle. It's not like you have some conscious goal you're trying to achieve when you do a startle reaction. So to me, all this stuff doesn't really make sense when you really dig into the details of what a human does.

RONNY: I can understand a person saying, "I guess that monk thought that setting themselves on fire was the best thing to do. They wanted to set themselves on fire. They decided to." Something like that. I think that is pretty tautologically true. But if that's how you're defining selfishness, then sure everybody's selfish. If my preferences say a lot about other people and they say nothing about what happens to my body, that sounds like a very selfless set of preferences. If I'm acting in accordance with them, I'm acting to move the world up my preference ordering, but it seems like my preferences themselves are pretty selfless. It is a frustrating semantic debate that happens all the time. I get frustrated by it too.

SPENCER: I think that was very well said. If you want to use that definition of selfishness, sure. But with that definition, you've basically removed all the interesting stuff of what we mean by altruism. Let's go back to your bigger point, which is a tougher one to argue against but also very interesting: should everything that matters to us be about the experience of some being?

RONNY: I want to use a similar frame, which is: people have preferences over the states of the world. I'm not trying to argue right now that it's wrong to only have preferences over how beings are doing. But there's nothing incoherent about caring about other stuff. Even selfishly caring about other stuff. One thought experiment I really like that I think makes this make more sense is: say there's this person. They're an artist (a painter) and they're terminally sick. They ask their wife to promise them that after they're dead, she will put their art up in art galleries, or at least let people see it, or put it on the internet. Then the person dies, and the wife burns all the paintings and buries them in the backyard, or does something totally disrespectful to the paintings after the person is dead.

SPENCER: Maybe to make it really extreme, had sex on top of them with her new partner.

RONNY: I wasn't sure if I was allowed to say that. But that's perfect. [laughs] It seems like the next question is, did the wife do something wrong to the painter? That's one way to think about it. But also, I just feel like, if I were the painter, I would care about that happening. I won't be around to care about it in the present tense once I'm dead, but I care now about what happens after I die, for instance. So I think that's one of the things that convinced me.

SPENCER: I think that's a very interesting argument. One thing I want to clarify is, are we talking about human psychology here? Like what humans are capable of caring about? Or are we talking about philosophy? How would you differentiate those things?

RONNY: That's a good question. I'm not sure. I think I am talking about philosophy. The way I'm thinking about it (right now at least) is something like, "What are the objects of care?" It reminds me, in a way that's hard to tease out, of a very common mistake in philosophy, which is: someone will be like, "Do unicorns exist?" and someone else will say, "Yeah." What they'll have in mind is that the idea of unicorns exists, so they'll say, "Unicorns exist." But the idea of a table and the table are very different things. It reminds me of that mistake. It seems to me, people might be saying something like, "You can only care about experiences or your models of things." Whereas, I think you can actually care about events directly. I think those are the proper objects of care.

SPENCER: Well, I think a good example of that might be that people often care about what other people think about them. They don't merely care about believing that people think those things about them. For example, you might want your children to love you. It's not just that you want to believe your children love you. Let's say someone's like, "Here's a pill you can take. If you take it, you'll believe your children love you even if they don't." That would be very unsatisfying. People actually want their children to love them, regardless of their belief about it.

RONNY: That's interesting. It sounds like the simulation thought experiment: would you be willing to go into a simulator that simulated a perfectly good life for you, where you forget that you decided to go into the simulator? That one's interesting because I've met a lot of people who say, "I would happily go into a simulator." But I wonder what they would say about something that's more specifically targeted. Like, "From now on, you will believe that you are rich." Or, "From now on, you will believe that your children really care about you." Without actually being rich, or without your children actually caring about you. Seems like a harder bullet to bite.

SPENCER: I was once talking to a friend who really wanted people to be attracted to them. I was trying to understand why. I kept asking them more and more questions. The reason we were talking about this is because it was actually causing problems in their life. I was looking at it from the point of view of, there's probably something underneath this desire to want people to find them attractive. If we can figure out what that is, maybe we can figure out a way they can get that thing that they want without the negative consequences that they're now getting from this desire to be attractive to everyone. After an hour of trying to dig into this, I finally came to the conclusion that maybe there is nothing underneath this desire. Not to say that it's not valid. Quite the opposite. Maybe, in fact, it's a fundamental desire that doesn't depend on other desires. That was really fascinating to me. It's hard to know for sure. Maybe they just didn't have access to it. But they seemed to have nothing else they were trying to get that was below that desire. Your thoughts on that?

RONNY: It makes sense from an evolutionary perspective. If I were designing agents and I wanted to make sure they reproduce as much as possible, I wouldn't want them to be okay with merely believing that they're very attractive. I wouldn't want them deciding, from Tuesday on, to just believe that they're super attractive. The thing I would want them to be going for is to make sure that they actually are super attractive.

SPENCER: Because what actually leads to spreading your genes into future generations is actually being attractive, not just believing you're attractive. You could easily see why evolution might have built us to have these abstract intrinsic values, like being attractive, for example.

RONNY: It's hard to make that case. Because it's not like in the environment we evolved in, there were pills around that could make you just believe that you're attractive from then on.

SPENCER: We're pretty good at rationalization. [laughs]

RONNY: Sure. I mean, there are cases where evolution does want to make you believe things that probably are not true. I guess if evolution really wanted us to have a solid belief that we're more attractive than we are, then it easily could have. Maybe it did. I don't know.

SPENCER: Some people seem to believe that. Maybe there's a bias towards us all thinking we're a little more attractive than we are, or something like that. Something I'm still a little confused about in this conversation is: if we're talking about philosophy, in what way could someone even be wrong about what they value? Why can't we just say someone could value anything and be right about it?

RONNY: I want to say that I think that's right. Valuing is a little bit of a trickier word, but definitely someone could endorse caring only about other people's experiences, and I don't think that there's anything I could say against that. A lot of other philosophers will disagree with me and say, "No, if they did that they would just be wrong." I think you can totally consistently care only about other people's experiences. But what I hear a lot is people claiming to have an argument that shows that the only thing you can or should care about is other people's experiences or your own experiences. I think that's the thing that's wrong. Part of it is exactly because of what you just said. I think you can care about whatever you want.

SPENCER: Do you think there's any limitation? Is there anything logically or from a philosophical basis, you think we shouldn't care about or can't care about?

RONNY: Maybe logically inconsistent things. Like the thing you care about the most is making as many square circles as possible.

SPENCER: You're in deep shit if that's your value system. [laughs]

RONNY: [laughs] That's gonna be kind of tough.

SPENCER: When we did a bunch of research on people's intrinsic values (which we define as things that people value for their own sake, not as a means to other ends), I was amazed at the diversity of intrinsic values people reported. I have to say, it's very hard to actually get someone's true intrinsic values. I don't think that the human brain naturally codes things as intrinsically valuable versus instrumentally valuable (in other words, as things you care about as a means to an end). We actually designed a little training program to teach people what intrinsic values are, to quiz them on common examples, and then to explain after each quiz question. In our main study, we had to throw away about 50% of people's data because when we would ask them these questions, they wouldn't do well enough to indicate that they actually understood what we were talking about with intrinsic values. It's pretty freaking tricky to get right and to do the right thought experiments. Finally, when we threw away the half of the data from people who didn't seem to understand very well and looked at the intrinsic values the rest reported, they listed a just amazing range of things. We ended up boiling them down to 22 categories. Each of the categories is quite broad. One category is virtue, which has all kinds of things in it, for example. That really pushed me toward thinking that humans really value a huge number of things. But there's still a limit. In other words, 22 categories is a lot, but we didn't find people valuing having seven thumbs or something. There are many things that almost nobody values.

RONNY: I guess when I say you can value anything, I'm not necessarily saying that humans will in fact care about anything. Probably the things that humans care about are only a fraction of what minds could care about in principle. However (I want to be careful here, so with some tentativeness), I do think you can, as a human, endorse caring about anything and not be inconsistent. You can care about whatever and not endorse caring about it. You can say, "I wish I did care about this stuff," and you don't need to have any care that relates to it.

SPENCER: Absolutely. I've encountered an interesting thing when I talk to people from the effective altruism movement sometimes, where it's quite difficult to carefully distinguish between things you care about versus things you want to care about versus things you think you should care about. That kind of subtle distinction. There can be a thing going on where, let's suppose that you're convinced on an intellectual level that utilitarianism is the truth about what you should care about. So you say you only care about the utility of conscious beings. Well, at the same time, there might be a psychological fact that says that's not all you care about. If someone analyzes your brain, you actually care about all these kinds of things. You care about being liked and you care about your own pleasure. Then because our language is so loosey goosey around this stuff, they could say, "I only care about utility. That's the one thing I care about." But you actually care about all this other stuff. Just on an empirical basis, I think it's actually easy to confuse ourselves. We're actually getting confused about what we care about. Any thoughts about what are the different things we can distinguish between there?

RONNY: One thing you touched on is: wanting to want something versus actually wanting it.

SPENCER: Right. Like I want to be the person that only cares about, let's say, utility in the world (utility maximization) but I'm not.

RONNY: Right.

SPENCER: I'm not saying that about myself. I'm just saying that this is a hypothetical person.

RONNY: Sure. I think it's not up to you what you care about. Sometimes you don't even know. I do think it's up to you what you endorse caring about, or what you endorse preferring. Or whatever affective attitudes you can endorse or unendorse.

SPENCER: Can you unpack that? When you say you can endorse something as opposed to caring about it, what does that mean?

RONNY: That's a great question. I'm not super sure what it means. It means something like, you want to have the attitude and you want to want to have the attitude, at some level n of iterated wants.

SPENCER: So you can want something. Like, "I want to maximize utility in the world." You can want to want it, which is basically, "I think that it's desirable to have that set of wants." Then you can want to want to want it, which is maybe endorsing it?

RONNY: Yeah maybe. Or just all the way up.

SPENCER: Because endorsing it doesn't mean you have the low level ones, right?

RONNY: It means that at some level, you do want it. You might miss it for five steps, but at the sixth step you do want it, and you endorse it so long as you also want it at the seventh step, the thirteenth step, and so on up.

SPENCER: You need to want it every level above that?

RONNY: Exactly.

SPENCER: So if you endorse something, there exists some level n at which you want it at that level and all levels above it. Really? Wow, that's intense.

RONNY: I just made that up right now. But I really don't know. Endorse is kind of a philosophical term of art. I'm sure a bunch of different philosophers mean different things by it. But it is normally used to mean something like that. More pre-theoretically, I like smoking but I don't endorse smoking. It's something like that. You can even have more sophisticated ones like, "I'm not okay with prostitution but I endorse being okay with prostitution."
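
To make the structure concrete, here is a minimal toy formalization of Spencer's gloss (our own illustration, not anything from the episode or a claim about the philosophical literature): "endorse X" as "there is some level n at which you want X, and you want it at every level above n," with the infinite tower of iterated wants truncated to a finite list.

```python
# Toy model of iterated wants: wants[k] is True if you "want^k" X,
# i.e., k-times-iterated wanting (want to want to ... want X).

def endorses(wants: list) -> bool:
    """True if some level n wants X and every level above n does too.

    `wants` is a finite stand-in for the (in principle infinite) tower.
    """
    return any(all(wants[n:]) for n in range(len(wants)))

# "I like smoking but I don't endorse smoking":
# a first-order want that is disavowed at every higher level.
print(endorses([True, False, False]))        # False

# Wanting kicks in at level 2 and holds all the way up:
print(endorses([False, False, True, True]))  # True
```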

SPENCER: That's really interesting. It reminds me of an article I once read. I don't remember where I read it exactly. But it drew these distinctions between, "I could enjoy smoking cigarettes when I smoked them but actually not feel positive affect about them when I'm not smoking them and not desire to smoke them." So you could have all these buckets of, "Do you enjoy it or not when you're doing it? Do you feel positive affect or not around doing it when you're not doing it? Do you endorse doing it or not?" Then there are things that fall into every single one of those combinations. This just shows that there are all these different related mechanisms of desire versus enjoyment versus endorsement that are all actually distinct. Just to give the other side for a moment, what do you think are the strongest arguments you've heard in favor of the idea that we should only care about the experience of conscious beings? Because clearly, some people think that that's true.

RONNY: I do want to give you one more thought experiment that I always go through when I'm thinking about this. Suppose I wanted to make a super strong chess playing computer that's super smart, and suppose it can go into its code and rewrite itself. If I designed it so that it's totally happy believing that it's won a trillion chess games in a row against the strongest players, I've actually made a really bad chess playing machine. Because it's just going to end up going into its code and rewriting itself to believe that it won lots and lots of chess games.

SPENCER: It's so much more efficient than actually trying to win them.

RONNY: Exactly. This is a segue to the thing I thought might be an argument, which is that when you actually try to think about how you would design a machine to care about the goal that it's designed to care about, and not care about its model of that goal, it just seems very hard. It seems like a very difficult problem. If it turns out that people who are really smart and think a lot about AI said that this was something you couldn't really do... You can be like, "If it looks obviously like what you're doing is rewriting your beliefs, then don't do it." But if it turns out that there's no good single formal way of expressing "care about the state of the world, don't care about your model of the state of the world," then I would find that a pretty convincing argument for something that at least rhymes with "you can only care about experiences."
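
As a toy illustration of why that is hard, here is a minimal sketch (our own, not OpenAI's setup and not anything discussed in the episode) of Ronny's self-rewriting chess machine: if the agent's reward is computed from its belief state rather than from the world, rewriting the belief dominates actually winning games.

```python
# Two candidate reward functions for a self-modifying chess agent.
# Rewarding the model invites wireheading: editing beliefs is cheaper
# than winning games.

world = {"games_won": 0}
belief = {"games_won": 0}

def play_and_win(world, belief):
    """Actually win a game, then update the belief to match."""
    world = dict(world, games_won=world["games_won"] + 1)
    return world, dict(belief, games_won=world["games_won"])

def rewrite_self(world, belief):
    """Leave the world alone; just edit the belief."""
    return world, dict(belief, games_won=belief["games_won"] + 10**12)

def reward_on_model(world, belief):
    return belief["games_won"]   # scores the map

def reward_on_world(world, belief):
    return world["games_won"]    # scores the territory

for reward in (reward_on_model, reward_on_world):
    best = max((play_and_win, rewrite_self),
               key=lambda act: reward(*act(world, belief)))
    print(reward.__name__, "->", best.__name__)
# reward_on_model -> rewrite_self   (the failure Ronny describes)
# reward_on_world -> play_and_win
```

The difficulty he gestures at is that for a real agent there is no obvious general way to write reward_on_world, since the agent only ever has access to its model.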

SPENCER: Well, it seems like humans differentiate between "I believe this thing" and "this thing is actually there in the world." But then the question is whether that idea, that the thing is actually there in the world, is itself just a model. We just don't treat it like a model because we think of it as, "I'm actually talking about reality." But is reality kind of a model in our own minds? I'm not saying that reality doesn't exist in the world. I do believe it does. I do think there are atoms or something out there. But our notion that there's a reality out there, maybe that itself is a model.

RONNY: Sure. I want to say one more version of this. This is one of my favorites. I think it's really funny. People will say that they only see experiences. But if experiences are anything, they're probably things that are in our brains and you're not seeing that. When I see the table, to see the experience, I would have to open up my skull and put some lights in there and use some cool camera. You get what I'm saying?

SPENCER: Yes. I think what you're saying is that the only thing that we can experience is this kind of simulation that our minds make that's sort of based on reality or is influenced by reality. Is that what you're getting at?

RONNY: No. I'm saying, when you look at a table, you're seeing the table. You're not seeing your experience of the table. Seeing an experience would be a very weird thing to see. You almost never see experiences. Experiences are things that happen in people's brains; what you see is a table.

SPENCER: Well, I guess it depends what you mean by 'what you see is the table'. What you actually experience is some kind of simulation your brain makes. We call that 'seeing the table'. Is that what you're talking about?

RONNY: Maybe. I guess I'm saying that, seeing the table is being in a relationship with a thing that's a table and you're not standing in that relationship to any experience. You're having an experience of the table and that happens when you see a table. But you're not seeing an experience. That would be a very weird thing to see. You would have to cut up on somebody's skull and look inside their brain.

SPENCER: That makes sense. Let's switch to another topic where people often get confused, which is this idea of how normative discourse is problematic. Could you tell us a little bit about that?

RONNY: I am probably happy to say that I'm a pretty convinced moral abolitionist, at least for a lot of people. There's this practice that we all have. I'm gonna call it moral discourse, which is just the kind of talking that we do where we say, "Hey, that was wrong." Or, "That's a good thing to do." Or, "It's our obligation to do that. It's my obligation to do that." Or, "That's superfluous." Or, "That's permissible." "That's impermissible." Basically, when you try to make sense of what's actually being said in this whole way of talking, or what the world would have to be like for that way of talking to make sense, it seems very unlikely that the world is anything like that. There are ways to try to vindicate moral discourse even if the world is nothing like what (at first glance) it seems like it would have to be for saying things like, "It's wrong to kill babies," to make sense. But I don't think they work out very well in the end.

SPENCER: What would the world have to be like to be able to make these moral statements? Which I assume includes things like, "You should do that," but in the moral sense of should.

RONNY: Yeah, something like that. An example I like is "Killing babies is wrong," or "X is wrong." At first pass, you would have to think that (and this is very much a first pass) there's this property that certain kinds of events or actions or states of the world have that, for some reason, intrinsically, regardless of what anybody cares about or thinks about it, makes them worth avoiding. Maybe worth avoiding to everyone. Something like: you've made a mistake if you don't care about one of these wrong things happening or one of these good things happening. They have some property that's independent of anybody's mental attitudes towards it.

SPENCER: So even if there was no human alive or conscious being alive in the whole universe that could witness that event, somehow that event still has this property that it's bad, right?

RONNY: Yeah. The reason I say that is, look at the sentence, "It is wrong to kill babies." It's very different from a sentence like, "I dislike killing babies." Or, "I think people who kill babies are really terrible." Those sentences are actually fine. If I say, "I really just like killing babies," and you're like, "I disagree," that's a weird thing to disagree about. You're disagreeing about what I'm into. But if I say, "It's really wrong to kill babies," and you disagree, that's a fine thing to disagree about. We could go on arguing about it. It seems like there should be something that settles our disagreement. Maybe another way to put what I'm saying is that I'm not sure that there is any way to settle disagreements like that. Maybe that means they're not even disagreements at all.

SPENCER: Well, I think some people will interpret that moral language as actually saying something else that's maybe less mysterious. Like, maybe "Killing bad", or just an expression of emotion or a command, like "Don't kill babies." What do you think about those kinds of ways of trying to get out of this?

RONNY: "Boo, baby killing." It's very important that it's not a statement. It's not something that can be true or false.

SPENCER: That's not how people seem to treat moral statements. They often will seem to argue about them.

RONNY: Yeah, they argue about them. One of the important things is that it seems like, if I think X is wrong and you think X is permissible, then there's something we disagree about and only one of us can be right.

SPENCER: I think something that bothers me about this discussion of metaethics is that it goes back to the 'Are we doing philosophy? Are we doing psychology?' thing. Are we making a claim about what people are actually intending to do, or the reasons behind what they're doing, when they say things like "It's wrong to kill babies"? Or do we think something else is going on?

RONNY: There's definitely two separate things here. One of them is a much more empirical question, which is, "What's the semantics of moral claims?"

SPENCER: Could you elaborate what that means?

RONNY: Just what do people actually mean when they say, "Killing babies is wrong"? Do they really mean the same thing as just, "Boo, killing babies"? One really crude, silly experiment that you could do for this: you could just ask a bunch of people, here are two pieces of speech, do they mean the same thing? One of them is, "Killing babies is wrong." The other one is, "Boo, baby killing." I bet you people will say they mean different things.

SPENCER: I'm almost certain they will. Most people will say they mean different things.

RONNY: That's a very crude experiment, obviously. It could be that there's some more sophisticated way that this is working out, where it turns out that they mean the same thing. There is definitely that question. But then there's another question of: given what (in fact) people do mean when they make moral claims, is the world a way that supports those claims being sensible, that supports them being true or false, or that supports them mostly being true or false? There are other things that you can have. You can have something that supports disagreement about moral claims, even if they're not claims that are true or false. I think that's not going to end up panning out.

SPENCER: Got it. So then, what do you want us to do? You're saying you think that there's a problem with the way we use moral language: we treat it as though it's factual, but if you really dig into it, it's hard to find a way to interpret it that would actually be factual and that also accords with how people think they're using moral language. What do you want to do about that?

RONNY: This is a great question. I'm not super sure. There's a couple of options. I could be totally wrong about all of this, but assuming that I'm right, it seems to me like sometimes moral discourse makes it very difficult to do moral trade. Say I don't care about animal welfare but I really care about the poor. Somebody else doesn't really care about the poor that much, but they really care about animal welfare. We can make trades, or we can be like, "Well, that's something they're into and this is something I'm into." You can get more of people getting along with each other. You can also just get more personal, selfish, practical benefits just by trading with other people.

SPENCER: Because you don't view them as wrong?

RONNY: Right. You might still really dislike it. But you don't think that they're making some sort of fundamental mistake. Because if both people think that the other one is making a mistake, then what you really should be doing is either continuing to argue or fighting the evil people who disagree with you. At the very least, we're gonna keep arguing, because one of us is wrong, and probably we'll figure out who; we're both smart people. So one option is that we could do a lot more moral trade. I'm not sure that that's going to work out for everybody. Another option is to just keep doing moral discourse for almost everybody, and then for a very small group of people, have them do moral trade. Another consequence of moral discourse is that, by its nature, it distracts us from the things that we actually care about. A good example is the thing we were talking about earlier. People hear arguments that the only thing you can care about is people's experiences, when in fact they care about a lot of other stuff. I see people, unfortunately, sacrificing a lot of stuff that they care about in order to get more of the thing that they think they're supposed to care about, or the only thing they think they're allowed to care about.

SPENCER: What's an example of that?

RONNY: Not being honest with people. Or asking people to not be honest with them so that they can hear more pleasant things. Or not being honest with people because the only thing that they should care about is their experiences and they shouldn't care about what's honestly going on. I think this can be right. It can be that you care a lot about your experiences with something and you don't care that much about knowing the truth about a certain situation. But if what's going on is you think you can't care about the truth at all and you should only care about the experiences, I think that's probably a mistake. Probably in fact, you care a bit about the truth and also somewhat about the experience.

SPENCER: I have a certain approach to how I think about moral trade and moral judgment. I'm curious to hear your reaction to it, if I can describe it to you. Basically, the way I think about it is, there are some things I intrinsically value, like my own happiness, the happiness of conscious beings around the world, people believing true things and not false things, etc. There are different things that other people value. Sometimes those things are things that I actually disvalue. For example, some people might think that if someone is impure then they should be punished. Maybe according to their religious beliefs, they think that's true. I would say I actually disvalue that, because I think that causes needless suffering and doesn't actually create any value. I would go in and not say that they're wrong, but I might oppose them. I might try to stop them from doing what I think of as the thing that creates disvalue. But without an actual disagreement saying, "You're wrong about that." It's just like, "I value X and you value not-X and there's a fundamental tension." On other topics, there might be things that they want to do that I don't think create disvalue. I'm just neutral towards them. They're like, "I think we should maybe have more prayer to this particular God that we worship," and I'm like, "I don't have a problem with that. There isn't anything bad about that." Maybe I don't think that's valuable. I don't believe that that particular God exists. But in that case, there are opportunities for moral trade. Unlike in the first example, where there's a direct zero sum trade-off, here you want more of that and I want more of this other thing, so maybe there could be cooperation around that. Curious to hear your reaction.

RONNY: If someone literally values the exact negative of what I value, then we're not gonna be able to trade. At least not on that quantity. But I could imagine situations where somebody really wants to punish someone for impurity, or they really want to punish me for impurity. I say, "Become a vegetarian and I'll let you punish me for an hour a week for being impure." Which I know sounds disgusting or distasteful to a lot of folks. But to me, it seems like a pretty good idea. If we can both get more of what we want the world to be like out of the trade, then it seems good to me.

SPENCER: I actually had a friend of mine who did what I thought was a very generous thing for my birthday one year where she called me up and said, "I'm going to avoid eating these foods that you think are unethical to eat but only on the condition that you eat them." Because she knew that I enjoyed eating them but just avoided it because it was unethical.

RONNY: That's the sweetest thing I've ever heard.

SPENCER: I know. She's really kind. She even did it at a two to one ratio so that she would give up twice as much as I was going to do because she intuited correctly that at that one to one ratio, I might not be comfortable with that. It was [unclear]. That was really cool.

RONNY: Did you take her up on it?

SPENCER: I did. I took her up on it.

RONNY: You can make a lot of sense of why we would have ended up with this way of talking about what's important if you think about our evolutionary history: it's a coordination mechanism. If there's a fact about what things are good and what things are bad, then when we disagree, we can keep talking. One of us is wrong, so maybe we can convince the other one to see the truth. But if the way that we talk about it is just, "I really like the Sun God," and the other person is like, "I don't like the Sun God at all," then there's really not that much more that we can do from there. We could fight or we could try trading. But if it's a zero sum conflict, that's going to be pretty tough.

SPENCER: Well, I think one way to try to understand morality is to take on the evolutionary lens. I don't think that's the only way to understand it. But I think it's a useful lens to at least temporarily put on. If you put that lens on from that perspective, it seems as though morality is this way of creating cohesion in a group where the group acts in such a manner that you don't harm the other group members. Not only do you think it's bad to eat dead bodies but you think it's bad for others to eat dead bodies and you think that the whole group should punish eating of dead bodies. Now it allows the whole group to coordinate on, "Let's not eat dead bodies." Of course, eating dead bodies can spread disease. So that actually benefits the whole group. There's many examples of that. But because we're able to be programmed by culture, it's not all built into us. It's not like all the answers to what we consider immoral are prebuilt. In fact, it's constantly changing through cultural evolution and so on. It drifts in many different directions over time.

RONNY: Almost all animals are rapid discounters, by which I mean they forgo very large benefits later for much smaller benefits now. It doesn't take a very long delay for $1,000 to seem not as good as $10 now. Or for 1,000 apples to seem not that good compared to 10 apples tomorrow.

SPENCER: Right, 1000 apples in five years is not that appealing compared to 10 apples tomorrow.

RONNY: It happens very quickly. Knowing that, you can see morality as something that would have been very useful as a way of keeping our preferences stable across time, especially knowing that there are third party punishers out there. So if the chieftain is not paying attention and his mate is around, I can sleep with her even if I know that later on I'm going to get caught and punished. Because I'm a rapid discounter, that's not going to stop me from sleeping with her. But if I can have something in my own head that's telling me, "You're bad if you do that," something in my own head that's punishing me ahead of time, that seems pretty good. I think morality can serve that purpose in people's own heads. Basically, it can shame us into being able to cooperate better even though we're super rapid discounters. It's a way for me to punish myself without having to be punished by third party punishers or other punishers in the group I'm in.
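
A standard way to model this rapid discounting is hyperbolic discounting, where a reward of size A delayed by D is valued at A / (1 + kD). Here is a quick illustrative calculation (the numbers and the discount rate k are made up for illustration; nothing here is from the episode):

```python
# Hyperbolic discounting: subjective value of amount A after a delay of D days
# is A / (1 + k * D). k = 0.1 per day is an arbitrary illustrative choice.

def discounted_value(amount, delay_days, k=0.1):
    return amount / (1 + k * delay_days)

print(discounted_value(10, delay_days=1))          # 10 apples tomorrow: ~9.09
print(discounted_value(1000, delay_days=5 * 365))  # 1,000 apples in 5 years: ~5.45
```

Under these made-up parameters, the reward that is a hundred times larger loses to the small immediate one, which is exactly the pattern an internal punisher like moral shame can counteract.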

SPENCER: That's really interesting. Good point.

[promo]

SPENCER: We've talked a lot today about philosophy and different ways of looking at morality. I think one counter argument people can make against philosophy as a discipline is, "Well, aren't we still discussing the same topics that were discussed 1000 or 2000 years ago? Does philosophy actually make progress into understanding the universe or the mind or whatever it is attempting to understand?" What are your thoughts on that?

RONNY: I think that's a pretty complicated question. I actually thought about this a little bit since we talked about it briefly, not too long ago. There's a pretty cool analogy that I want to draw to a paradigm for making AI safe. Before I try to do that, let me give an answer first, which is: I do think that philosophy makes progress. But the way that it makes progress, most of the time, is not by settling questions but by adding concepts and arguments and tools to the repertoire of the conversation. You can't write a new paper on realism in metaethics and totally ignore everybody who's written about it before, totally ignore Simon Blackburn and whatever quasi-realism stuff he's writing about. When people add new arguments and new distinctions to the repertoire, you can't ignore them anymore. If they make it. Sometimes they don't make it. Sometimes people try to make new arguments and it's shown very quickly that they fail or that there was some mistake. But if it's not mistaken, and it seems like a genuinely difficult argument that's not going to be put down very easily, then it makes it so that you can no longer ignore it.

SPENCER: It seems, though, that someone could respond, "Okay, we're adding these new arguments, and you have to take them into account when you're making your own arguments. But that in itself doesn't show that we're actually getting any answers. So what's the purpose of all that stuff if we don't get to the answers?"

RONNY: Even if we don't settle the answers, I do think that we shift around the reasonable credences.

SPENCER: So we become more confident in some explanations of things and less confident in others?

RONNY: Right. I think part of what makes it difficult is that philosophy is mostly about really fundamental assumptions. Two things happen. One thing that happens is that we narrow down the sets of assumptions that you can have together. You can no longer have Assumption A and Assumption B at the same time, because it turns out they lead to really bad problems together. But you can still have Assumption A, and you can still have Assumption B. So it'll look like philosophers disagree a lot about stuff. But you can now infer more from their answers to one question about their answers to other questions. We at least find out stuff about which views are consistent with each other.

SPENCER: That does seem to change the probability mass because it basically eliminates some views you can have.

RONNY: The complication is that things like views also get more sophisticated in some hard to describe way. They morph over time a bit.

SPENCER: Is that because they're trying to avoid certain objections?

RONNY: Yeah. Something like that, but it's normally things like... I don't know how to describe this. They weren't taking a position on it before, but now they have to take a position on it.

SPENCER: There's some nuance so you have to actually say the position in more detail, to take a position on some more detailed case that wasn't even being considered?

RONNY: Something like that.

SPENCER: Do you think that most philosophers view themselves as in the enterprise of trying to figure out the truth about the universe?

RONNY: Another thing I should say is that I don't even know what philosophy is anymore. There are so many sub-disciplines of philosophy. Maybe they're united by the use of argument and the analysis of argument. Maybe that's what philosophy is. Maybe it's the use of arguments about stuff that isn't purely definitional. But definitely philosophers of physics, the people who work in the foundations of physics, who are often trained as philosophers, think of themselves as trying to get to fundamental truths about the universe. And people who work in philosophy of language seem to pretty much think of themselves as empirical scientists who are building models of different fragments of natural language. Metaethicists generally don't think of themselves as empirical scientists, but they could think of themselves as people who are studying empirical phenomena like moral discourse and what's going on there.

SPENCER: My understanding is that a lot of philosophers reject the idea that they're doing empirical work or are studying just what humans do or what humans think or humans say.

RONNY: I think that's less true in philosophy of language these days. I know people working in the foundations of physics think empirical things are definitely relevant to what we're doing.

SPENCER: What do we make of the large amount of disagreement between philosophers? If you look at the PhilPapers Survey by Chalmers, there's huge disagreement. Is that because the interesting stuff is the stuff that people disagree on? Is there a bunch of stuff philosophers agree on that's just not worth putting in a survey because it's boring?

RONNY: I'll be honest with you. I'm not sure. There's definitely a bunch of stuff that philosophers agree on. Like that humans generally have two feet and that trees are often taller than 10 feet. I have a hard time thinking of something that no philosopher disagrees about. I think, unfortunately, there are incentives in the field where it's a good way to make a name for yourself to find something that's obviously true and then make the best possible argument against it. You can become the philosopher that disagrees that one plus one equals two.

SPENCER: Then you have your niche carved out and you can publish your papers.

RONNY: Yeah. Unfortunately, there are incentives like that.

SPENCER: Just to give some idea for those who haven't seen the PhilPapers Surveys: they ask questions like, "Abstract objects: Platonism or nominalism?" Then it's like 39% lean towards Platonism, 38% say they lean towards nominalism, and then 23% say 'Other.' These are surveys of professional philosophers, I believe, although I think you can cut it up by different specialties. Another example would be, "Knowledge: empiricism or rationalism?" 35% will say empiricism, 28% will say rationalism, and then about 37% will say 'Other.' That just gives us a sense that there's widespread disagreement about many of the most important topics.

RONNY: Hearing you phrase the questions like that, just seems like rationalism and empiricism... I bet you that if we interviewed the people who answered those questions and asked them what they meant, a lot of the time what they said wouldn't sound consistent with each other.

SPENCER: So maybe it's partly just how do you even define these things precisely enough that we can even ask people questions about them?

RONNY: I think it probably just is true that maybe there's a lot of disagreement in philosophy because it's really hard.

SPENCER: Well, I think it is hard. A lot of the hardest stuff gets pushed into philosophy. Because we don't have more obvious ways to work on it. We can't solve it by engineering. We can't solve it by math. We can't solve it by checking something in the world. So it's like, "Okay, then it's philosophy."

RONNY: I want to quickly try to draw this analogy for you that I talked about earlier on. There's this idea for how to make useful AI systems. The idea is you have them debate with each other. In this framework, the way that they analyze debates is as a game tree. You imagine there's a question, and then both systems give their best answer to the question. Then you imagine that they're going to have a debate. At the end of the debate, it's going to be judged by a human judge.

SPENCER: An example would be, is this thing a picture of a puppy or not? Right?

RONNY: Totally. Then they have a debate about it. Then the hope is that a judge looking at just small parts of that debate, towards the end of the debate, will do better at figuring out whether it is a picture of a puppy than they would have by just trying to answer the question themselves.

SPENCER: In this case, we assume the judge can't see the picture of the puppy. Because otherwise, it'd be obvious. Let's say they can't see it; they're just hearing the debate.

RONNY: What I've been thinking is that maybe you can think of the progress of philosophy as something like that. I did some work as a subject for OpenAI, which is the group that's working on this stuff. I was just really impressed by how well judges were able to find the answers to pretty difficult questions. One of them was the Monty Hall problem. It's a puzzle in probability theory, and these were people who didn't know anything about the Monty Hall problem or about probability theory, but they did really well at being able to figure out who was the person giving the right answer. So maybe part of what's going on in philosophy is that people are just running down the game tree of the debate. They're giving arguments and giving counterarguments. Hopefully, it's easier to tell what the right answer is by looking at what the arguments are after 2,000 years than by trying to start from scratch and answer the question yourself.
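
For readers who don't know the puzzle: a prize sits behind one of three doors; you pick a door, the host (who knows where the prize is) opens a different door that hides nothing, and you may switch to the remaining door. The debaters argued for switching or staying; a short simulation (our own sketch, not OpenAI's setup) settles the answer directly:

```python
import random

def monty_hall_trial(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that hides no car and isn't the pick.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(monty_hall_trial(switch) for _ in range(trials))
    print("switch" if switch else "stay", round(wins / trials, 3))
# stay   ~0.333
# switch ~0.667
```

Switching wins whenever the first pick was wrong, which happens two times out of three.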

SPENCER: That's a nice way to put it. Although I would say still, it's not that easy to evaluate.

RONNY: That seems right.

SPENCER: I would also add one other thing that I think philosophers do, which is that they do seem to come to a pretty strong consensus that certain ideas are deeply flawed to the point that they should be abandoned.

RONNY: I actually think one of the bigger points of consensus amongst philosophers (although I haven't looked at PhilPapers, I get the impression) is actually something stronger than what I was saying: most philosophers would say not only that you can, but that you should, care about lots of things besides people's experiences. You should care about being honest or being virtuous or fulfilling your duty, all sorts of stuff besides what people's experiences are like.

SPENCER: It's mainly non-philosophers who are making the argument that that's all you can care about?

RONNY: I would say mostly. Of course, there are still some philosophers who make saying that their whole gig, but they are definitely a lot rarer.

SPENCER: Do you worry that for some philosophers, it's basically a game and it's not about reality anymore?

RONNY: Do I worry about it? I don't think I worry about it. Do I think it's true?

SPENCER: Spoken like a true philosopher. [laughs]

RONNY: [laughs] Let's see. I definitely don't think that anybody consciously thinks that. At least not my professors at my graduate program. It sure seems like they're definitely trying. In particular, philosophers of language seem to have pretty good epistemics to me. They do empirical work. They look at data and they make formal models. They try to figure stuff out. That's also been my experience with people working in foundations of physics. But I think there's an attitude amongst academic philosophers that's like: if you think something has a 10% probability, then you should definitely write a paper arguing that it's true.

SPENCER: You're saying, if it would be incredibly surprising for the thing to be true, then even a 10% chance is enough to argue that maybe this thing everyone else is dismissing is true. Right?

RONNY: Right. Can I give good enough arguments for it? I think maybe something that philosophers do (and I think there's something noble about this) is that they're more interested in how good the arguments are. They're not as interested in whether they think the conclusion is true. The rule for "Am I gonna write a paper and try to publish it?" is how good the arguments I came up with are, not how likely I think it is that the conclusion is true.

SPENCER: Is that just because you don't want to let your prior odds come into the analysis?

RONNY: I think the real reason is probably just because it makes it easier to publish.

SPENCER: Well if you have strong arguments and it seems actually totally implausible a priori, that's maybe better. Because it's actually much more interesting that way than if you're like, "It's probably true just a priori and also I have a good argument." That's not as exciting.

RONNY: I think there is something good and right about just paying attention to how strong the arguments are and not worrying too much about whether the conclusion seems plausible. But I think if you talk to a lot of philosophers and you ask them, "How many of the theses that you defended in your papers do you think are true?" I wouldn't be surprised if a lot of them said less than 10%.

SPENCER: Hearing a debate between Julia Galef and a philosopher really changed my mind; it made me think that philosophers use intuition a lot more than I previously realized. She had a bunch of quotes from philosophers talking about how they use intuition and things like that. I found it pretty persuasive. I'm curious to hear your thoughts: what do you view as the role of intuition in philosophy?

RONNY: I'm not a big fan of intuition in philosophy. But let's see. Do I have intuitions in philosophy? I think I try to not use intuitions when I'm doing philosophy.

SPENCER: Do you see it used by many others?

RONNY: I know that some people explicitly say that certain kinds of knowledge are grounded in intuition. Michael Huemer says very openly that he's an intuitionist about moral knowledge. He pretty openly will use intuition as the grounds for some broad moral claim, like that it's wrong to harm beings. Something I do see happening is someone will say that something's just preposterous or just absurd. Then if you push them on why, a lot of the time I tend to think their real reason is just that it's intuitively absurd, but they don't want to admit that.

SPENCER: I see. But you don't want to accept that as an argument? So they hide behind the absurdity of it.

RONNY: I do think that, in fact, people rely on their intuitions all the time to figure out whether something seems reasonable or not. But my philosophical method (not that that's anything to take too seriously) is that I try not to let my intuitions influence my conclusions or arguments when I'm trying to do philosophy.

SPENCER: Well, let me just give you a couple of examples where I see intuition coming in. Maybe I'm wrong about this. One example would be in the Mary's Room argument, where Mary's a color scientist who knows everything there is to know about color but she's colorblind. So she's never seen the color red. Then one day she gets surgery so she can see the color red. The intuition is that she must have suddenly learned something new about color that she didn't know before by being able to see it that she couldn't have known just by studying it. Isn't that fundamentally the kind of an intuitive argument? Or maybe I'm mistaken?

RONNY: That seems right. I tend to think of people's intuitions as getting in the way when I'm talking to people about philosophy, and I tend to think my own intuitions are getting in the way too. Daniel Dennett has this concept of intuition pumps. You could definitely see that a lot of the thought experiments philosophers come up with are trying to be ways to get people's intuitions to swing one way or the other. But I'm not too convinced that people's intuitions being able to swing one way or the other is very good evidence about the things that we're interested in finding out about. I don't think that the Mary's Room thought experiment is very good evidence about what the nature of qualia really is or what's going on there.

SPENCER: I see. Well, another example that comes to mind is that famous thought experiment: imagine you wake up one day and there's a violinist who's hooked into your body, and your organs are keeping the violinist alive. That's used to discuss the topic of abortion and a woman's right to choose and everything. I think it has a strong emotional appeal to a lot of people. But it sounds like you're saying that while it might be persuasive, it's not necessarily evidence of the kind that you would want to use in your philosophical thinking.

RONNY: For that sort of thing, I'm not even sure if there is a way to settle those questions about responsibility: what's your obligation, what's permissible, and what's not permissible. In those cases (and I'm making a substantive claim here that many people would disagree with), I think it really actually just is trying to be persuasive. I think that's the whole point.

SPENCER: Got it. Just before we leave the topic of philosophy, why study philosophy? Why should people consider doing that?

RONNY: I want to go back to saying that there's so many different kinds of philosophy that it's a hard question to answer in the first place.

SPENCER: But let's say, why study philosophy rather than chemistry or finance?

RONNY: Well, I'm not sure about finance, but I would probably advise somebody to study economics before studying philosophy, most of the time. One great reason to study philosophy is that it's really fun. I think that's a really good reason to study philosophy. If you're super interested in it and you're super curious about it, I think that's a great reason to do it. Especially if you're the sort of person who's been asking related questions your whole life. I think it can be really fulfilling to get into it with people who have also been doing that and see what it's like. I do think there are benefits to studying philosophy. Most of those benefits, like getting good at thinking about arguments, getting good at making really subtle distinctions and keeping them in your head, things like that, can be really useful. But I also think you can get those skills from other fields. I have a hard time trying to come up with a justification for studying philosophy other than just being intrinsically interested in it. Although, if you're trying to solve some particular kind of problem, I think it makes sense to find philosophers who have thought about problems related to it. I don't remember this paper super well, but I remember being super impressed by it. There's this paper by Elliott Sober about how difficult it is to actually make causal comparisons between heritability and environment. I remember just being really convinced that the way we normally think about it is confused. A lot of the time we talk as if, "Maybe it's 20% environment and 80% heritable. Or maybe it's 10% environment and 90% heritable," as if those were causal contributions to the final feature of the organism. He makes really convincing arguments that that's actually pretty confused. The best you can do is think of it as how much of a difference each makes to the outcome, not how much each one is causally contributing.

SPENCER: That hits on what's most exciting to me about philosophy, which is adding it on top of other things. Let's say you're a psychologist studying personality. I think you could benefit a lot from using some philosophical-style analysis, or maybe even getting a philosopher involved, to help you think about: What exactly am I studying? What do I mean by personality? What are the different possibilities there? What are the disambiguations?

RONNY: That seems legit. I also think philosophers can be really good at coming up with really good questions, or really good frames for what the really interesting question is. That's something philosophers tend to be good at. But I also think there's a lot of cost to studying philosophy, or there can be. I've noticed that you get trained in philosophy to be really good at arguing for any position. I'm not sure that that makes you better at figuring out what the right position is.

SPENCER: It's like debate club, where you're just assigned a side and you learn that you can argue for anything. Which is not necessarily the healthiest.

RONNY: I was recently surprised by how well people judging debates did in these OpenAI experiments. It seems like they were really good at figuring out what the right position was, and they did it by judging debates, so that's updated me on the value of debate.

SPENCER: Well, I think that's really cool. But I'm concerned about a failure mode, which is that there are styles of debating that are very convincing to the human mind but don't have much to do with the truth. Maybe if you could somehow limit it to less persuasion-driven, more factual debates, that would help. What do you think about that?

RONNY: That seems right. But at some points in these experiments, what I was doing was trying my darndest to be dishonest about the answer to a particular math problem, to give the best arguments I could, and to be as confusing as possible. Sometimes I would do really well, but the judges would still usually be able to figure out what the right answer was.

SPENCER: You were playing the side of the misaligned AI, trying to make bad arguments to convince people. Is that right?

RONNY: Exactly.

SPENCER: Maybe you're much more of a philosopher than you are a car salesman. Maybe if you were more of a car salesman, you would have done better.

RONNY: That could be right. Maybe they should be hiring more car salesmen. [laughs]

SPENCER: [laughs] You might actually be more effective. Well, apparently, I heard that Cialdini, who wrote the famous persuasion book, actually studied car salesmen: the types of people who are just using applied persuasion that they (in many cases) figured out on their own.

RONNY: Main point being, I do worry about philosophy training people to be really good at arguing for any position. Combine that with the incentives for arguing for interesting positions, which you can pretty much take to be positions that are initially really implausible but that you can make seem plausible, and that seems like not the best habit to get into if your goal is figuring out what the right position is.

[promo]

SPENCER: For the final topic, I want to talk to you about what we can do to try to create more systematic wisdom or rationality in the world. I'd love to hear some of your thoughts on that.

RONNY: Well, one of the things that inspired me (I did try to work on this myself; not briefly, actually, I worked on it for a bunch of months, but I ended up burning out on it a bit) was these OpenAI-style experiments on debate. What happened was, I did one of the early experiments. It was a physics puzzle having to do with ice in different tubs and where the water line ends up after the ice melts. First I gave my probabilities for each of the answers to the puzzle. Then I read the debate and gave new probabilities. The debate actually ended up convincing me of the wrong answer. I was pretty embarrassed about that. I thought, "Wow, how often has it happened that I've read a debate and been turned in the wrong direction?" It seemed like a pretty real test of my rationality, or at least of something very closely related to rationality: after you read the debate, do you end up believing the right position more, or less? That seems like a really important question. OpenAI was doing empirical work on this, so that at least seems like a good test to me. It does seem to me that, in general, we could do more empirical work on trying to find interventions that make people better at figuring out the truth through conversation, or by reading conversations, or other things.
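That before-and-after test is easy to make precise. Here is a minimal sketch (the data layout and the choice of Brier scoring are assumptions for illustration, not a description of OpenAI's actual setup) of scoring whether reading a debate moved a judge's credences toward the true answer:

```python
from dataclasses import dataclass

@dataclass
class JudgeRecord:
    """One judge's credences over a puzzle's answer options, before and after reading a debate."""
    prior: dict[str, float]      # answer -> probability, elicited before the debate
    posterior: dict[str, float]  # answer -> probability, elicited after the debate
    correct: str                 # the true answer

def brier_score(probs: dict[str, float], correct: str) -> float:
    """Lower is better; 0 means full confidence in the right answer."""
    return sum((p - (1.0 if ans == correct else 0.0)) ** 2 for ans, p in probs.items())

def debate_helped(record: JudgeRecord) -> bool:
    """Did reading the debate move this judge's beliefs toward the truth?"""
    return brier_score(record.posterior, record.correct) < brier_score(record.prior, record.correct)

# Hypothetical example: a puzzle with three candidate answers.
record = JudgeRecord(
    prior={"rises": 0.3, "falls": 0.3, "stays": 0.4},
    posterior={"rises": 0.1, "falls": 0.2, "stays": 0.7},
    correct="stays",
)
print(debate_helped(record))  # True: the debate increased credence in the right answer
```

Averaging debate_helped over many judges and puzzles gives the kind of empirical measure Ronny describes: does reading a debate leave people believing the right answer more, or less?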

SPENCER: Why do you think that topic is important?

RONNY: I think there are a couple of reasons it's important. I also think there are a couple of ways it could be dangerous. But my current position is that it's a net benefit if we do something like that and get it right. One of the reasons it's important relates specifically to the goal of making people better at disagreeing about things. When I say better at disagreeing, I mean that after the disagreement, both parties are more confident of the truth than they were beforehand. Michael Huemer has this great thought experiment where he imagines that scientists discover a meteor coming toward Earth, due to arrive in three years. What ends up happening is, one group of people says, "Of course there's a meteor coming," and another group says, "Oh no, there's no meteor coming." It becomes a politicized problem. There would be factions over how much money to put into stopping the meteor. There'd be all sorts of people trying to stop money from going into stopping the meteor. It just seems like if we could get people in general to be better at disagreeing, that would be really helpful. Also, if you could get people who are working on really difficult questions, maybe people working on ethics, or on aligning AI, or on stopping meteors from crashing into the Earth, to be better at disagreeing and at figuring out what's right from disagreements, that could be really useful. I also think it could be dangerous, because if we make people who are doing bad stuff better at figuring things out from disagreements, that could have downsides. But on the whole, I think it would be a benefit.

SPENCER: I agree it could have downsides. But it seems like one of those things that asymmetrically pushes toward good outcomes more than bad ones.

RONNY: I agree. The reason I mention the downsides is that I have a friend whom I trust, and when I told them I was working on this, they reached out and said they were worried about the downsides. I thought they made pretty persuasive arguments, although in the end I concluded that the benefits outweigh them. One specific example of this, something I was working on that I've talked to you about before: you start off by giving people puzzles and having them say which answer to the puzzle they think is most likely to be true. Then you pair people who disagree and have them talk to each other. Then you have them give answers to the puzzle again. From this you can see roughly how good people are at figuring out the answer to the puzzle by having disagreements about it. Then you can design different kinds of interventions, like training programs, or having some sort of moderator, or all sorts of protocols you could train people in, and see whether people given these interventions do better as a result at figuring out the answers through disagreements. It sure seems like, if we could find some really strong interventions, or even just moderately strong interventions, that made people better at disagreeing, I know I would use them all the time. I think a lot of people would use them.
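For concreteness, here is a rough sketch of that experimental loop. Everything in it (the greedy pairing rule, the data shapes, the placeholder discussion step) is a hypothetical illustration, not the infrastructure Ronny actually built:

```python
import random
from dataclasses import dataclass

@dataclass
class Participant:
    pid: int
    answer: str  # current best guess on the puzzle

def pair_disagreers(people: list[Participant]) -> list[tuple[Participant, Participant]]:
    """Greedily pair up participants whose current answers differ."""
    pool = people[:]
    random.shuffle(pool)
    pairs = []
    while len(pool) >= 2:
        a = pool.pop()
        match = next((b for b in pool if b.answer != a.answer), None)
        if match is None:
            break  # everyone left already agrees with `a`
        pool.remove(match)
        pairs.append((a, match))
    return pairs

def accuracy(people: list[Participant], correct: str) -> float:
    """Fraction of participants currently holding the right answer."""
    return sum(p.answer == correct for p in people) / len(people)

# Usage: measure accuracy before and after paired discussions.
correct = "B"
people = [Participant(i, random.choice("AB")) for i in range(20)]
print("before:", accuracy(people, correct))
for a, b in pair_disagreers(people):
    pass  # here each pair would discuss, then both would re-answer the puzzle
print("after:", accuracy(people, correct))
```

Running the same loop with and without an intervention (a training program, a moderator, a discussion protocol) and comparing the before-to-after accuracy gains is the basic comparison the experiment is after.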

SPENCER: I love that idea. I hope that you'll pick that back up again or that someone else will pick up on your work.

RONNY: If anybody listening to this wants to work with me on it, or just wants to take what I've already worked on: I have most of the infrastructure for it pretty much done. So if anybody wants to work on it with me, I'm happy to collaborate, and I'm also happy to hand over what I already have to people who want to try to do it on their own.

SPENCER: That's really cool. I'll also just say why this topic of how to make people systematically wiser and more rational is so important to me. There are fundamentally two reasons. The first is that, on a day-to-day basis, there are many ways that our decisions, the way we choose what to do and what to believe, could be improved. That would just, on an individual basis, make our lives better. I think there's a lot of room for us to be happier, have greater well-being, and achieve our goals more. But the potentially even bigger thing is that I'm extremely concerned that society is going to drive itself off a cliff. Essentially, we're growing powerful, in the sense of having powerful technologies, faster than we're growing wise, and this increasingly throws humanity into precarious situations. Whether it's nuclear weapons, or superintelligent AI, or climate change disasters, or just social media addiction and AI algorithms that predict what you're going to click on and put you into your own filter bubble, there are so many of these brewing issues that I don't see us growing wise fast enough to handle them. Any thoughts on that?

RONNY: I really liked that frame. I think it's difficult because I'm not sure that... For instance, would I want to increase everyone's general intelligence? Would I want to increase everybody's IQ score by a standard deviation? I'm not sure that doing that would actually make us safer. I think it is difficult to figure out the kind of wisdom that we're looking for here that would make us less likely to destroy ourselves.

SPENCER: It seems like there's some kind of wisdom we're talking about that would make us less likely to destroy ourselves, but it's hard to pinpoint exactly what that means. It probably has something to do with getting clear on what we're trying to achieve and what the trade-offs are. But it's also something we'd want to mix with compassion, so it's not just, "I'm gonna get wiser so that I can beat your group." It's, "I'm gonna get wiser so that we as humanity, or as all conscious beings, can thrive."

RONNY: There are things in the vicinity that seem like they could be mostly dangerous. If we got a lot better at building nuclear weapons and at making AI systems really quickly, without getting a lot better at deciding when to use nuclear weapons and when to deploy AI systems, that seems like a negative to me. It seems really hard to think about which sorts of interventions are going to have mostly positive effects and which are going to have mostly negative ones. It seems like something you could mess up when trying to pick out the right kinds of interventions.

SPENCER: Well, just to point to a few key areas I feel are especially ripe for improving our wisdom: one is widespread cooperation. If you think about nuclear weapons, you can't have a unilateral solution where one country solves the problem. You need to figure out ways to get groups to collaborate, even groups that are potentially adversarial along a number of different dimensions. Climate change could be another example where we really need widespread cooperation to make things happen. Another is wisdom in the form of trying to figure out what's true about important topics. Can we even agree on whether climate change is occurring? If we can't come to an agreement on that, it's very hard to deal with it. Obviously, if it's not happening, there's nothing to deal with; but if it is, there's something to deal with, and we need to be able to come to a consensus. The idea that we could all look at the evidence and eventually come to a consensus seems super important to me.

RONNY: I agree. I think there's a big problem of big issues in our society becoming politicized and that makes it much harder to think about them. I think it'd be great if we could figure out some way to make people better at disagreeing with each other.

SPENCER: Thanks, Ronny. It's really great to have you on.

RONNY: Great talking with you, Spencer.

[outro]
