CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 188: Effectively encouraging people to give more (with Josh Greene)


December 16, 2023

How can people be encouraged in ways that are more natural and less manipulative to increase the amounts they give to charities? Why are arguments based on the effectiveness of charitable organizations less compelling to most people than we'd like for them to be? What percentages of a social group should be "doves", "hawks", "eagles", or something else? To what extent should our knowledge about our evolutionary history shape our values? Why are children more likely than adults to engage in prosocial behaviors towards strangers? Aside from anecdotal evidence, how do we know that political polarization in the US has been increasing over the last few decades? How can bridges of respect and trust be built between warring political tribes? How can people even begin to undertake the project of building bridges across political divides if they have no interest in understanding or engaging with the other side — especially if they believe that the other side is completely deranged, evil, or otherwise unfit to govern at any level? What is "deep pragmatism"? What might a "psychologically-informed" version of utilitarianism look like?

Josh Greene is Professor of Psychology and a member of the Center for Brain Science faculty at Harvard University. Much of his research has focused on the psychology and neuroscience of moral judgment, examining the interplay between emotion and reason in moral dilemmas. His more recent work studies critical features of individual and collective intelligence. His current neuroscientific research examines how the brain combines concepts to form thoughts and how thoughts are manipulated in reasoning and imagination. His current behavioral research examines strategies for improving social decision-making and alleviating intergroup conflict. He is also the author of Moral Tribes: Emotion, Reason, and the Gap Between Us and Them. Learn more about him at his website, joshua-greene.net.


JOSH C: Hello, and welcome to Clearer Thinking, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Josh Greene about incentivizing charitable giving, defusing political animosity, and moral emotions and cooperation.

SPENCER: Josh, welcome.

JOSH G: Oh, delighted to be here. Thanks for having me on.

SPENCER: I've been hearing about your work for many years, so I'm really excited for this conversation. One thing I really appreciate about your work is that not only do you study things academically, but you actually build things and put them out into the world to accomplish goals. So I'm excited to talk about some of those projects as well. Let's start with Giving Multiplier, which I think is a really fascinating approach to trying to get people to do more good with their charitable donations.

JOSH G: Yeah. Well, thanks. Actually, this "getting stuff out in the world" is kind of a new thing for me, but it's the stuff that I'm really excited about, and I'm excited to talk to you about all of it, starting with Giving Multiplier. So this was our attempt — and when I say "our," this was really led by Lucius Caviola; that's what I mean when I say "we" — to think about how we can encourage people to give more effectively with their charitable donations, in a way that feels natural and friendly. A bit of history on this: I was long ago convinced by the philosopher Peter Singer that those of us who are fortunate enough to have some disposable income should be using it to help people who are in great need of food, medicine, etc., often people thousands of miles away. And I was convinced in a kind of philosophical way by his famous drowning-child example. If there were a child drowning right in front of you, but it would cost you your suit to dive in and save the kid, you would still say that you have an obligation to save the child, even if it's going to muddy up your suit and you're going to lose that money. And Singer argued that that's essentially the situation we're in now, as long as there are children on the other side of the world drowning in poverty, so to speak. And I did some experiments — some of them unpublished, at least one of them published — trying to use that kind of argument to convince people. And what we found is that it didn't do very much, although it had some effects. More recently, Lucius and I thought, "What if, instead of telling people, 'Hey, you really should do more, and instead of supporting the charities that you like (let's say the local animal shelter), you should be doing these other things that are more effective,' we tried something else?" And by more effective, I mean a lot more effective. I know this is stuff that you've covered on your podcast before, but briefly: organizations like GiveWell do research to figure out how many lives you can save for, let's say, $10,000. And charities that distribute malaria nets, or charities that encourage women to have their young infants vaccinated, do an incredible job of saving many lives per dollar. And we wanted to encourage people to support charities like that. And our insight was that this could go better if, instead of saying, "Don't do what you feel like doing; do this other thing," we said, "Well, why don't you do both?" So we ran some experiments where we tried giving people two versions of the choice. In one, they're asked to pick their favorite charity, and then we say, "Okay, we're giving you some money to work with. You can give it all to your favorite charity or all to this other charity that experts recommend, which is super effective." And what we found was that very few people in that condition, which is the control condition, chose the super effective charity. But if we gave people a third option and said, "You can give it all to the one you picked, or all to the one that our experts are recommending, or you can do a 50:50 split," people really liked doing the 50:50 split. And we found that 75% more money went to the highly effective charity if you offered that 50:50 split option. And then we thought, "Well, that's interesting. It could be useful." But if we just published a paper saying, "Hey, make split donations," people probably wouldn't just do that spontaneously. So we needed a way to advertise and promote this.
We thought, "Well, the obvious thing is to offer to incentivize it, to add money on top of those donations." And we found, unsurprisingly, that when you add money on top, people like it even more. But then there's the question of where does that money come from? And our next sort of little idea was, "Well, what if people might be willing to pay it forward?" That is, someone who's taken one of these donation matches for a split donation that they decided to make. And you said, "Hey, if you take the money that you're just about to give to this highly effective charity that you hadn't heard of 10 minutes ago, what if you put that in a matching fund that will allow other people in the future to make a split donation and get a match on top the way you just did?" And we found that about 30% of people were willing to do that, which was enough to cover the costs for the people who chose not to do that. And we looked at this data, and we were like, "It looks like this could be a self-sustaining thing, where people like making these split donations." And I can get a little bit into the psychology of why they're so appealing. And it looked like there were enough people who were willing to cover the costs to kind of create a virtuous cycle where some people pay for matching funds and other people take matching funds. And everybody's happily making these donations that go partly to people's personal favorite charities, and partly to the highly effective charities on our list. So we created this website called Giving Multiplier that implements this. And we thought, "All right. Well, maybe this will raise $20,000-$30,000." And to our delight and amazement, it's raised $2.5 million over the last three years. We launched it around this time in 2020. And about $1.5 million of that has gone to the highly effective charities recommended by research organizations like GiveWell, Founders Pledge, and Animal Charity Evaluators. So I can say a little bit more about psychology and the charities and stuff like that, but I'm curious to hear your thoughts about this, Spencer.

SPENCER: I think it's really fantastic. I mean, it's such a clever idea. So just to summarize for the audience: you're showing people a highly effective charity alongside the charity of their choice, and you're saying, "Hey, instead of giving all the money to your charity of choice, do you want to split it? And if you split it, you'll actually get a match on top." So there's an incentive to split. And not only do people want to split anyway, because they somehow like diversification, but giving them the match on top for splitting actually makes them do even more. Is that correct?

JOSH G: Yeah, that's right. And I can add something to that. So part of it is that people just like splitting, that is, diversification, as you call it. But what our research showed is that there's actually something a little special about these splits between a personal favorite charity and a highly effective charity. So what we found is that, if you give to a charity that you love — let's say, supporting the local animal shelter, or giving money to the school that you graduated from, or something like that — people are happy to give, but how much they give is not really so important to them. They just kind of want to scratch the itch to support that thing that they care about. And what that means is that whether you give $50 or $100, it feels about the same. But if you only give $50 to the charity that you picked yourself, then you've got this extra $50 that you can use to do something else. And it turns out that the something else of supporting a highly effective charity is really beautifully complementary. That is, you have the warm, positive feeling of supporting the charity that comes directly from your heart, but you also feel like you're doing something really smart and effective by supporting a charity like the Against Malaria Foundation, which distributes malaria nets and can save someone's life for around $5,000. Which, by the way, may sound like a lot, because people hear things like, "Oh, you can save a life for $100." But research shows it's not quite that easy. At the same time, it really is doable for a lot of people with disposable income. That's a side note. So people like doing those highly effective things at the same time. And some of our experiments show that those bundles (as we call them) that have both the highly effective charity and the personal one are especially appealing. And when we ask people to evaluate someone who chooses to make split donations, compared to someone who just gives to the charity that comes from the heart, or just gives to the highly effective charity, those people are seen as kind of generally awesome people. They're seen as being really warm, really altruistic, but they're also seen as being highly competent, as being really smart. And our thought is that that's how people see themselves, or you could say it's the kind of set of values that motivates people to make those split donations.

SPENCER: Now, how do you know how they're perceived? Do you actually run a study where you describe this behavior and ask people to rate the person doing it?

JOSH G: Yes. And in fact, the paper describing all of this research was published in Science Advances earlier this year. You can find it on my website or maybe we can make it available some other way for your listeners.

SPENCER: Yeah, we'll add that to the show notes. Let's talk about the psychology of this more. If you were to give them a second charity that wasn't pitched as effective, do you think they would still want to diversify?

JOSH G: What we find is no. So in the experiments that I was gesturing towards, we do things like say, "Look, here's a charity that you chose, and here's a really popular charity." And what we find is that the favorite-popular split doesn't have quite the same punch as the favorite-effective split.

SPENCER: So they diversify a little bit but just not as much.

JOSH G: Yeah. People do like the splits, but they especially like what you might call the 'heart-head split.'

SPENCER: Do you think what's going on here is that there's sort of an emotional appeal for their favorite charity (like a community charity, or one for a disease that affected someone personally), and then there's sort of an analytic appeal, a kind of theoretical appeal, and these are really working on different systems in their mind?

JOSH G: Yeah. I think that that's exactly right. And it has parallels with a lot of research in judgment and decision-making in general. I'm thinking most famously of Kahneman and Tversky, and the terminology Kahneman later adopted: System 1 and System 2. I wouldn't say that it's exactly this impulse and then a kind of corrective; it doesn't quite fit the same mold as a lot of the dual-process kinds of stories. But certainly, I think that part of what's going on is that there's this emotional pull, as you said, towards that personal favorite charity. But then people are also thinking about the consequences and give a kind of more consequentialist response and think, "Gosh, but it's nice to do something that really has a big impact."

SPENCER: You might think that if people were purely rational agents, but altruistic rational agents, then giving away $2 would be twice as good to them as giving away $1, and so on. And I'm wondering, what do we know about how much better people actually feel it is to give away more money?

JOSH G: A perfectly linear curve would be: giving $2 is twice as good as $1, and $4 is twice as good as $2, all the way up. We haven't precisely mathematically modeled it, but the curve levels off. What we find is that it rises fairly steeply up to about giving 50%, and then after 50%, there's not that much change (that is, 50% of the total amount that one is giving). And we see this kind of pattern elsewhere. Years ago, I did studies using moral dilemmas where we asked people about killing one person to save 5 lives, or 10 lives, or 20 lives. And you see the same kind of diminishing returns, which is weird, in a way. Because why should the 100th life be worth any less than the fifth life or the first life? But for almost any good that humans experience or consume, the more you have of it, the less impact it has. And I think our brains are designed to represent value that way. In fact, at least within some subsystems, the brain seems designed to represent quantity that way in general.
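[As an illustration of the shape Josh describes, here is a sketch of a "perceived value" curve that rises steeply up to a 50% allocation and nearly plateaus afterward. The functional form and slopes are assumptions for illustration, not the paper's fitted model.]

```python
# Illustrative only: felt value of directing some fraction of a donation to
# the effective charity. Steep gains up to the 50% split, near-plateau beyond.
# The slopes (1.8 and 0.2) are assumed, not estimated from data.
def perceived_value(fraction: float) -> float:
    return min(fraction, 0.5) * 1.8 + max(fraction - 0.5, 0.0) * 0.2

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{f:.0%} to effective charity -> felt value {perceived_value(f):.2f}")
```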

SPENCER: Do you think this differs for analysis coming from the more rational part of the mind, the System 2 part, versus the System 1 part? Because what you hinted at earlier is that maybe for this sort of personal giving, or emotional giving, it doesn't matter whether you give $50 or $100, but maybe for the more effectiveness-oriented giving, people actually care more about the amount.

JOSH G: Yeah, I think that's right. So what we show in our paper is this: we ask people, "How much do you feel like you did something effective?" as they retrospectively rate their decision about how to give. And what we find is that people report feeling, or thinking, that by giving everything to, let's say, the highly effective charity, they would feel twice as effective as by giving half. It's a little bit of a weird question, because it's kind of a feeling about something that, in some sense, has an objective answer. But we see more of that linear relationship in how satisfied you are with the effectiveness of what you did or what you could have done. So there is an asymmetry there for sure.

SPENCER: Why do you think it is that people aren't that susceptible to just pure effectiveness arguments?

JOSH G: I think that this is not what humans are designed for, either biologically or, for the most part, culturally. The way that humans survive is by cooperating with people who have special relationships to them. The earliest form of cooperation is with kin, people who share your genes. And that explains why we have such powerful feelings for the people to whom we are closely related. Then, in the next circle out, you have individual reciprocal altruism: basically friends, people you do things for and they do things for you, and you have a kind of ongoing mutual relationship. Then, within larger communities, there can be a kind of indirect reciprocity based on reputation: "I'm nice to people who are part of this community, and I benefit from being part of this community." And at the largest levels, there can be cooperative acts with strangers, even at the level of a nation, say, where the vast majority of the people are strangers, and yet soldiers are willing to take risks in war for love of country. That may not be the only reason, but in some cases it is. But the idea of pure altruism towards everybody, towards anyone, no matter how distant they may be from you, simply because they are suffering, is a very new idea. And it pushes against the grain of basic human morality. So, yeah, it is an uphill battle. But at the same time, our big smart human brains can think this through and say, "Well, in a way that makes sense. Why is the suffering of some child on the other side of the world, who we'll never meet, any less important than the suffering of people I care about, or even myself?"

SPENCER: If we think about game theory simulations, there's this classic simulation where you have a bunch of agents, and some of them are doves, which essentially always cooperate with each other, and others are hawks. The hawks will always fight over resources. And you've got these classic scenarios where, if it's all doves, the doves do well. If it's all hawks, it's not that great, because they're just fighting all the time. But if you have a lot of doves and a few hawks, the hawks kind of dominate, because they just go beat up all the doves. And then as you run the simulations, you find that maybe there are some more advanced strategies that do even better. Like, you could have a tit-for-tat strategy; let's call them eagles. What an eagle does is basically act like a dove unless it gets attacked, and then it will fight back. And so, if they're all eagles together, they'll cooperate with each other. But if you then insert some hawks, well, at least the eagles will defend themselves against the hawks. So you have all these different strategies running. And I wonder, if you think about this and try to make it more realistic, thinking about real human societies (obviously, these simulations are oversimplified compared to human societies), is there a niche for a pure altruist, a purely giving agent like the dove? Or do you think that that's actually just unrealistic, and in fact doves would just get weeded out, and you need at least hawkish or eagle behavior in order to survive?
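[For the curious, here is a minimal sketch of the hawk/dove/eagle dynamic Spencer describes. The payoff values V (the contested resource) and C (the cost of a fight) are conventional textbook assumptions, not numbers from the episode.]

```python
# Minimal hawk/dove/tit-for-tat ("eagle") sketch with assumed payoffs.
from itertools import combinations_with_replacement

V, C = 4.0, 6.0  # fighting over the resource costs more than sharing it

def play(a: str, b: str, rounds: int = 10):
    """Total payoffs when strategies a and b meet repeatedly."""
    score_a = score_b = 0.0
    a_attacked = b_attacked = False  # memory for the tit-for-tat eagles
    for _ in range(rounds):
        a_fights = a == "hawk" or (a == "eagle" and a_attacked)
        b_fights = b == "hawk" or (b == "eagle" and b_attacked)
        if a_fights and b_fights:    # both fight: split value, pay the cost
            score_a += (V - C) / 2
            score_b += (V - C) / 2
        elif a_fights:               # lone aggressor takes everything
            score_a += V
        elif b_fights:
            score_b += V
        else:                        # two peaceful agents share
            score_a += V / 2
            score_b += V / 2
        a_attacked, b_attacked = b_fights, a_fights  # eagles remember attacks
    return score_a, score_b

for a, b in combinations_with_replacement(("dove", "hawk", "eagle"), 2):
    print(f"{a:>5} vs {b:<5}: {play(a, b)}")
```

Running this shows the pattern Spencer sketches: doves thrive with doves, hawks exploit doves, hawks do badly against each other, and eagles cooperate with doves and eagles while denying hawks their big payoff.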

JOSH G: I don't think there's any evolutionary process that can sustain pure impartial altruism in the long run. Evolution is fundamentally competitive. The things that continue to exist, either individuals or the traits that they have, continue to exist because the individuals who had those traits outcompeted, in some sense, the individuals who didn't. It could be direct, confrontational competition, or just competition for resources, or leaving more offspring. Evolution is fundamentally competitive, which means that at the highest level, the strategy is always to compete. Now, that doesn't mean that that's the case at lower levels: we are part of teams, we are part of communities, because as communities we can outcompete others. So I actually think that pure altruism requires, in some sense, stepping outside of the logic of evolution (certainly biological evolution). Then you might say, "Well, how does one do that?" My favorite example of this is birth control. From an evolutionary perspective, the invention of birth control pills would be the absolute worst thing that could happen to a species: these things that you swallow that make you less likely to reproduce. So what happened there? What went wrong from a biological perspective? Well, to state some obvious things that are necessary for explaining this: humans don't just reproduce by thinking, "Hmm, it would be good for me to leave some offspring," though some people may have that thought. Mostly, people are motivated by feelings of sexual desire and love. And those things are often unrelated to the biological logic that would make one want to engage in procreative acts. And so, we clever humans got these big brains that enable us to solve lots of complicated problems and exist in many different environments. One of the most amazing features of humans is that we can survive in the Arctic and we can survive in the Amazon. Take any other creature and put them in such a wildly diverse range of environments, and they would be dead in most of them. But we humans use our big brains to adapt and solve problems, individually and culturally, that other species are forced to solve through biological evolution. So we got these big brains as a kind of general problem-solving device. And then we kind of outsmarted evolution. We said, "Hey, we like sex. But we don't always want to have more children. So we create these pills or other devices that enable us to have the sex and the good things that come with it without producing more offspring." And that's a case where the evolutionary process is kind of short-circuited by our big brains. And I see essentially a global moral philosophy, where we care about distant strangers, maybe not as much as we care about ourselves and our closest associates, but at least enough to spread our good fortune globally. That's a kind of hack. It's the moral equivalent of birth control. Or, in maybe more significant terms, it is taking the apparatus that we evolved for goodness within a tribe and figuring out how to make it grow. And I think that is the fundamental human project, the thing that we should most aspire to, but it is an uphill battle.

SPENCER: There are some people who take the idea that we evolved for certain purposes, like spreading our genes, and think, "Well, because we evolved that way, that should be our goal in a sense." And I've seen this coming from at least two different directions. I've seen it from a religious direction, a kind of Catholic natural law view that says, "If we're made this way by God, then we should want to be this way," and that analyzes the natural world looking for ideas about the way things should be. On the other hand, I've seen it in a secular context as well: this idea that maybe you really should think of spreading your genes as part of what it is to be alive, because that's what evolution programmed us with. Personally, I think that's a big confusion. I think they're looking at the way things are and saying that's the way they should be, and I don't think there's any logical justification for that. But I'm wondering about the psychology of it. What is known about why or how much these kinds of natural-equals-good ideas are present in us?

JOSH G: Well, understanding evolution is a recent phenomenon. So I don't think there's something deep in our psychology that's going to explain why humans make these kinds of evolution-based normative arguments, which fall into what is sometimes called the naturalistic fallacy category: assuming that what's natural is what's good. In my experience, when people make those kinds of arguments, historically as well as currently, they already have something that they want to justify, and then they want to say that it's on the side of nature. And coming from a religious historical tradition, where nature is thought to be the creation of God and therefore must be good in some sense, saying that something's natural has a sort of automatic air of goodness to it. So if people in the 19th century wanted to justify a hierarchy among human groups, they'd say, "Well, this is what's natural. This is what nature intended." Or something like that. I don't think there are too many people who think seriously about this who buy those kinds of arguments. And if you think about them, you realize that they lead to absurd conclusions. The most fundamental dictum of evolutionary logic is to spread your genes. And that means that men, at least, should be banging down the doors of sperm donation facilities, trying to get their genes into as many wombs as possible. But the people who make these kinds of arguments are not lining up at sperm donation opportunities in order to make good on that. So I think it's usually a front for something else.

[promo]

SPENCER: I find the arguments about not finding pure altruism in evolutionary history fairly persuasive. But it does seem to contradict what I experience in my ordinary life, which is that sometimes, not that often, I meet people who seem almost purely altruistic. They seem to have so much empathy that they have empathy for essentially everyone and everything. And it makes me wonder whether there could be an ecological niche for pure altruists in a tribal world. If you had a pure altruist in the tribe, maybe that's actually a pretty good thing, because that person is not going to be helping people around the world anyway; they have no capability to do that. So in practice, they're going to be really, really beneficial to the tribe and not waste resources outside the tribe anyway.

JOSH G: Yeah, I think that may be right at a psychological level. If you live in an environment where the only people you're in a position to be helpful or altruistic towards are people within your tribe, then a sort of universal goodness program could work. I'm a little skeptical of that, because human groups definitely rubbed up against each other in violent ways in our ancestral environment. So I think it's unlikely you'd have people who, if they saw a member of the enemy tribe on the other side of the hill, would say, "Here, have my lunch."

SPENCER: But maybe that prevents them from killing you also or makes allegiances between tribes.

JOSH G: Well, I think what that would suggest is that people would have a capacity to be conditional cooperators: "I'll be nice to you if you're nice to me," but not committed to a goodness blank check. But there is some research to suggest that something like this may be right in children. Felix Warneken is a developmental psychologist at the University of Michigan who studies prosocial behavior, especially in children. He found something which is maybe not so surprising to people who've spent a lot of time around young kids, but which is puzzling from an evolutionary point of view: young kids are very happy to help and be nice to complete strangers. So in the experiments that Felix did, he or another member of the research team would be, let's say, carrying a bunch of books, and they need to open up the doors of a chest to put the books inside, and they keep kind of banging into it, because they can't open it with both hands occupied carrying the books. And the little kid in the room, who's maybe a year and a half old and just barely able to walk, will go over, look at him, and then open up the chest and enable him to put the books there. It's very adorable. But why would kids do that? And the thought is that young children may grow up in a sufficiently protected environment that they can afford to be universally altruistic. Which is not to say that children are always wonderfully behaved, but they're at least at times willing to be altruistic to people who they don't know at all, who, for all they know, could even harm them. So that would be a version of this idea: there's at least a developmental stage where being generous and helpful and altruistic in ways that could get you into trouble as an adult tribe member makes sense as a kind of opening move for starting relationships and learning to make friends within your tribe.

SPENCER: Right. Because if mainly you're around people from your tribe, that's probably a pretty good instinct to just be helpful.

JOSH G: Yep, exactly.

SPENCER: If we think about these three different spheres: one is the self; the second is the community or your tribe, the people around you; and the third is the world broadly, strangers you're never going to meet, that kind of thing. It seems to me, just from meeting people in the world, that people differ a lot in how much they care about each of those three things relative to the others. And I'm wondering what's known about this and about individual differences in it.

JOSH G: Yeah, there are big individual differences and really big cultural differences. I think for most cultures, for most of human history, there was very little incentive to be anything other than skeptical of, if not markedly hostile toward, outgroup members, and certainly little incentive to be generous toward them. And cultures vary a lot in terms of how open people are to cooperating with strangers. There's been some wonderful classic experimental work, and more recently historical work, on this by Joe Henrich and colleagues. In their first wave of studies in the 2000s, they found that people from cultures that are more market-integrated — and their measure for this is what percentage of your calories you are collecting or growing or hunting yourself versus buying from somebody else — and people who belong to a larger religion, one big enough that you won't know most of its members, are more likely to be generous towards strangers, or at least, in the case of religion, strangers under the same religious umbrella, and, in a trade-off situation, less committed to the people who are closest to them. Many of your listeners will be familiar with the concept of WEIRDness from Henrich and colleagues: Western, Educated, Industrialized, Rich, and Democratic. In research over the last decade, they've tried to figure out how Europeans got so WEIRD. And the hypothesis is that during medieval European history, the Western Church broke up the clan network, because it was a kind of competing source of loyalty. The reason for doing this, consciously or unconsciously, was to consolidate the Church's power. But what this led to later was a more individualized society, where people were better able to cooperate with strangers from other places, trading across towns and things like that. And this created the more individualized, marketplace-oriented culture that has led at least many people in the West to adopt a more global perspective when it comes to ethics. Of course, this is not to say that all Westerners are by any means ethically globally minded. But there's more of that than in traditional cultures, which tend to be more closed off within the tribe in terms of where people put their value.

SPENCER: It's interesting how markets can facilitate this potentially. But it also seems to me that religion may have played a major role in this, going from little tribes that had their own worldviews to these large worldviews where you could travel 20 miles or 100 miles and meet someone who had similar belief systems to you. It seems to me that that would create a lot more cooperation across people that you've never met before.

JOSH G: Yes, absolutely. What you just summarized is a version of the story that Ara Norenzayan tells in his wonderful book, "Big Gods," where he makes exactly that kind of historical argument: that there was a fundamental shift with the birth of big Gods, which is to say monotheistic religions — so Judaism and Christianity — ones where the big God takes an interest in human affairs. For some reason, the God of the universe cares about whether people are good cooperators with the people they're supposed to cooperate with, or whether they're being mean and nasty defectors. An interesting manifestation of this is that across many world religions, you see a lot of iconography related to eyes. You see this in the Buddhist tradition, the Mayan tradition, the Christian tradition, and many others: the idea that God is watching you to see whether you're going to be a good tribe member, a good cooperator, or not. And then, of course, God (at least in some traditions) metes out the final reward or final punishment, depending on how good or bad a cooperator you were. And this combines with a kind of symbolic binding together. If you're a Catholic, you can go to a mass on the other side of the world, in a country where you don't speak the language, and the mass is completely familiar to you, and you can participate. It puts people in the same trust network. That seems to be a very important historical development. And then Norenzayan argues that in especially WEIRD types of places — to some extent in North America, but to an even larger extent in Northern Europe, in Scandinavian countries like Denmark — you have secular society playing the same kind of guarantor-of-cooperation role that religion plays in more traditional cultures. So religion can fade in places where you can trust the police and the courts and so on to make sure that people behave reasonably well towards each other. And you can imagine that when those things start to break down, or when people perceive them as breaking down, people may go back to more religious or otherwise informal mechanisms for seeking stability in society and securing cooperation.

SPENCER: Another factor I wonder about with regard to altruism is the extent to which an environment is a zero-sum, competitive environment, where if one person gains another loses, or a scarcity environment, where there's not really enough to go around, compared to an environment that's positive-sum, where there's a lot of beneficial cooperation to be had, and post-scarcity, where there's a lot of abundance. It seems to me that in the latter kinds of worlds, it's way more possible to think about everyone universally. Whereas, if you feel you have to compete with everyone around you just to get enough to survive, it seems very natural that that would close you off and make you think very locally, in terms of yourself and maybe your closest family members.

JOSH G: Yep. I think that's exactly right. And this is a place where economic growth can be a great boon to cooperation, if the benefits of the growth are shared equitably. For example: for much of the history of the United States, since the mid-19th century especially, the US really reaped the benefits of the industrial revolution, which made great use of the new inventions of first steam power, and then electricity and mechanization, for producing things in factories and for transportation. And the United States just grew and grew and grew. And for much of the 20th century, American democracy was very strong. And the benefits of economic growth were widely shared, with the very important exception that they were not widely shared racially. But depending on which measures you use, in the late '70s and '80s those growth curves started coming apart. That is, the GDP, the size of the pie, kept getting bigger and bigger as it had throughout the 20th century, but a typical person's slice of the pie, in real dollars, leveled off, and for many people, especially people without college degrees, that slice got smaller. And I think the animosity that we are seeing in the United States today has many sources. I don't deny an important role for social media and things like that. But one of the most fundamental drivers may be the sense that, yes, the pie is growing, but most people's slices are not. And that creates a kind of scarcity mindset that can easily devolve into an us-versus-them mentality among people who feel that "we," that "my group," is getting short shrift.

SPENCER: I know you've done some really interesting work on how we can bridge the divide between political groups and reduce political animosity. Do you want to tell us about some of that?

JOSH G: Yeah, sure. So this is work that we've been doing for several years, led by Evan DeFilippis. Most of it was his doctoral dissertation research. The work is not published; our article is currently under review. So I can't talk about the results in too much detail, but I can explain the general strategy. Building on many of the things that you've said: what is the best way to get people who are members of groups that are at odds with each other to have less negative attitudes toward each other, even more positive attitudes? Both evolutionary theory of the kind that you are pointing towards, and a lot of research in social science, indicate that the way to do this is essentially to put people on the same team. Not just to bring them together in any way, but to have people from opposite sides of whatever the divide is — whether it's racial, religious, or political, and those things are not necessarily orthogonal — be on the same team, work together in an interdependent way, have that team succeed, and have the benefits of that success shared. And so, Evan and I were thinking, starting with the United States where we both live: what would it take to create cooperation between Republicans and Democrats in a way that's meaningful, that involves people's identities, so that if you have a good cooperative experience, it can extend to the group and not just to the person you're cooperating with? And could it be scalable, by being digital and by being something that people might enjoy doing? What we came up with is a quiz game where Republicans and Democrats play as partners. We have them answer questions that are designed to promote a kind of interdependence. So we did a lot of research to figure out what kinds of things Republicans know about that Democrats don't, and vice versa, starting with things that are not about politics. If you ask people, "What's the name of the family on the show Duck Dynasty?" Republicans are more likely to know the answer; very few Democrats do. But if you ask about a show like Stranger Things or The Queen's Gambit, Democrats are more likely to know the answer. And then the quiz game can get into more political territory. And I should say, before I talk about the political questions, the way these games work is that people are paired up online, and they're connected by chat, and they have to chat with each other to agree on answers. They only get their points, and only get their money, if they both submit the right answer. So if they both got the Duck Dynasty question right, because the Republican knew the answer, then they both get the points and the money after chatting about that. On more political sorts of questions: if you ask what percentage of gun deaths in the US involve assault-style weapons, Democrats are likely to say that it's like 30% or 50%, whereas Republicans are likely to say that it's like 1% or 2%. And in that case, the Republicans are more accurate. But if you ask about rates of crime among immigrants, Democrats will say that they're very low, and Republicans are more likely to say that they're high. And in that case, the Democrats are more likely to be right. So we have these questions where both sides get to be right, and both sides get to be wrong. And people have this experience of working together and learning from each other, and figuring out the right answers in order to do well on the quiz.
And we have run a series of randomized controlled trials with this, where the control condition is that instead of a Republican and a Democrat playing together, you have two Democrats or two Republicans. So it's the same game; what changes is whether you're playing with someone from your political in-group versus your out-group. And the results have been very promising across multiple RCTs. We see positive effects lasting longer than we initially expected, going out weeks and, in some cases, months. So that's the basic research. And then I've spent a lot of the last year working with folks at a group called the Global Development Incubator, which is a social impact tech incubator, on trying to bring this out into the world. We're just beginning that process and doing some pilots. But that's our strategy. And the hope is that this could be something that could reach thousands, maybe even millions of people, and shift people's attitudes on a large scale. I think of it as being kind of the opposite of a troll farm or divisive content online. Just as people have been enormously successful at spreading mistrust and disrespect online, at scale and at very low cost, what we hope is that this can be the opposite of that. But to begin, we're starting in more controlled institutional contexts. And then, maybe at some point, we'll have a version of this that will be out there in the wild. So that's what we're up to.
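[A minimal sketch of the scoring rule Josh describes: partners chat, each submits an answer, and they earn points and money only if both submit the right answer. The point and payout values below are assumptions for illustration, not the study's actual parameters.]

```python
# Interdependent scoring: neither partner can score alone.
from dataclasses import dataclass

POINTS_PER_QUESTION = 10    # assumed
DOLLARS_PER_POINT = 0.05    # assumed

@dataclass
class Pair:
    player_a: str           # e.g. the Republican partner
    player_b: str           # e.g. the Democratic partner
    points: int = 0

    def submit(self, answer_a: str, answer_b: str, correct: str) -> bool:
        # Both partners must agree on the right answer to earn anything.
        if answer_a == answer_b == correct:
            self.points += POINTS_PER_QUESTION
            return True
        return False

pair = Pair("R-partner", "D-partner")
pair.submit("Robertson", "Robertson", "Robertson")  # the Duck Dynasty family
print(pair.points, f"${pair.points * DOLLARS_PER_POINT:.2f}")
```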

SPENCER: What's the outcome measure you use to see how well it's working?

JOSH G: Well, I can't get into too much detail about the stuff that is unpublished, but it's standard measures of people's animosity towards the other side. So often what's used is a feeling thermometer, "How warm or cold do you feel towards the other side?" And things like, "How would you divide $100, let's say, between a random Republican and a random Democrat?" And then we also ask some questions related to the democratic process, and what kinds of attitudes people have towards that and towards compromise, and what kinds of red lines they would draw in the democratic process.

SPENCER: People often talk about polarization increasing. And I'm wondering, to what extent do you agree with that? And also, how do we know that?

JOSH G: Well, people have been measuring what's called affective polarization in the literature reliably since the late '70s, using those feeling thermometer questions. And there has been a steady increase from the late '70s up to the present. I think it's something like a 20-degree increase in the gap between how people feel about their own party versus the other party, which is a lot.
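[To make the thermometer arithmetic concrete: ratings run 0 to 100, and the gap is the own-party rating minus the other-party rating. The specific ratings below are hypothetical; only the roughly 20-degree growth in the gap comes from the episode.]

```python
# Hypothetical feeling-thermometer ratings (0-100); only the ~20-degree
# growth in the own-party vs. other-party gap is from the episode.
then = {"own_party": 75, "other_party": 50}
now = {"own_party": 75, "other_party": 30}

gap_then = then["own_party"] - then["other_party"]   # 25 degrees
gap_now = now["own_party"] - now["other_party"]      # 45 degrees
print(f"gap grew by {gap_now - gap_then} degrees")   # 20
```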

SPENCER: When you think about the game you designed to help reduce polarization, what do you think the really essential ingredients are? Would it work if a Republican and Democrat were cooperating in any game? Or is there something about the game that has to be special?

JOSH G: Well, I can't get into too much detail here. But it does seem that having things get political adds a bit to it. That said, I also think it's possible for people to cooperate on something that doesn't have that political dimension and have that have positive effects as well. And there's other research suggesting this: some lovely studies by Salma Mousa looking at Christians and Muslims playing soccer together in post-ISIS Northern Iraq, things like that.

SPENCER: Do they have to be on the same team? Or if they're competing against each other, does that actually still work?

JOSH G: I think they have to be on the same team. I don't know if they've done a control where they're on opposing teams, or if it's just that it's only been shown to work that way. But that would make sense to me. I wouldn't completely rule out that competition, at least in some contexts, can work. But I think it would work not because of the competition, but because of the cooperative frame around the competition. Even in a sporting match, where there's 'my team versus your team,' and a point for me is good and a point for you is bad, it's a zero-sum relationship within the game. But the arrangement of the sporting context, with everybody agreeing to play by the same rules, and everybody agreeing to abide by the calls of the referee, and so on and so forth, is in some sense a broader, more significantly cooperative enterprise. And so, in any kind of controlled competition, as opposed to all-out war, there is both a cooperative element and a competitive element. And I think that if the cooperative elements are sufficiently strong, then I can imagine that even something competitive can have the same effect. But that's a hypothesis.

SPENCER: It seems like there are a few different ways this could operate. One is that it could just be an exposure effect: maybe any time spent with the other side, doing anything, is helpful, because it makes you think of them as an individual instead of an abstraction, and shows you that your caricatures of the other side might not be true. Another factor could be that it makes you like them more; playing a game with someone can make you like them. Or maybe there's a respect factor: "Oh, they knew the answers to some things I didn't know," and maybe that's helpful. Or maybe there's a reciprocation thing: "Oh, they helped me in the game, so that makes me feel I should reciprocate toward them or their group." So, I'm wondering how you would break down these different pieces of what's actually happening.

JOSH G: Well, there's a long literature on what people call 'contact,' bringing groups together. And what the literature suggests is that it depends on whether or not the contact goes well. The kinds of things you suggested — when people come together in a more cooperative context, and where things are fairly well-controlled — that's when things are more likely to go well. But in other cases, it can go badly. If you have people attempt a dialogue that's completely unstructured, you have a void where people's partisan narratives can just rush in and fill it, and those sorts of things tend not to go well. There have been some classic cases where exposure to the other side, either digitally or in person, doesn't work very well. Ryan Enos in Harvard's Government Department has a classic experiment where he hired Spanish speakers to have Spanish-language conversations on commuter trains in Boston. And he randomized this across different commuter lines. And then he surveyed the people who were or weren't exposed to the Spanish-language conversations. What he found was that the people who just overheard people speaking in Spanish reported more exclusionary attitudes related to things like immigration or making English the sole official language of Massachusetts (or of the United States; I don't remember the exact DVs, the dependent variables). So having that kind of exposure made people more negative. In another case, this one digital, Christopher Bail and colleagues did a study where they paid people to follow a Twitter bot that would retweet tweets from the other side politically (US Republicans and Democrats, or conservatives and liberals). And what they found is that this had no positive effects, and in some cases it actually had a backfire effect. In the case of Republicans hearing what the liberals were saying, at least on Twitter, it if anything hardened their partisanship. So I don't think it's just mere exposure; it's the right kind of exposure. And the things that you were pointing to, I think, are the things that do best: where you can see that the other person is smart, competent, worthy of your respect, and also worthy of your trust, and where they value your contributions while having contributions of their own. And that's consistent with our experience.

SPENCER: How do you message it to the players in the game? Are you telling them that you're setting them up with someone who has opposite political views?

JOSH G: Yeah, in that case, it's very explicit. Some of the things that we're planning will probably be less explicit, in the sense of people coming in and saying, "I'm a Republican" or "I'm a Democrat." Instead, people can have a more intuitive sense of the differences they might be coming in with, culturally or politically. But in the experiments we've run so far, we actually have people do a little get-to-know-you session, where they take a quiz on the other person's information, and we can then document that they know not only what the other person's favorite fruit is, but also what their politics are.

SPENCER: What do you think about getting people motivated to do something like this? Because it seems to me, there are some people that are already pretty motivated to understand the other side. But many people don't feel motivated to understand the other side. And if anything, they might even kind of feel negatively about the idea of bridging the gap.

JOSH G: That is, I think, one of the great selling points of the kind of work that we're doing. In the last five years, there's been a lot of interest in political bridge-building. And a lot of the organizations that have sprung up have, in some way or another, been about dialogue: bringing people together across the dinner table, or in a kind of comfortable meeting context, to work out their differences, or at least get to know each other first and then try to do that. And I think there are two significant limitations to that approach, which is not to say that I'm by any means opposed to it; I think that work is important. First, the people who are likely to participate in a dialogue session are already people who are at least fairly open-minded, so there's a kind of self-selection issue. And second, in-person dialogue is much harder to scale. So part of what we like about the quiz game is that you don't need to have an interest in cross-partisan bridge-building to play. The pitch can just be: "Hey, play a game, win some money." And so, I think we can reach a wider range of people. And then, because it's digital, it's more scalable. People also seem to enjoy the game a lot; we collect data on this and also get people's comments. And it helps to have something that people like to do, a game or an experience that's renewable, because we can always change up the questions: you can do the quiz many times with different questions. So that's, I think, one of the strengths of the strategy.

SPENCER: What would you say to someone who says, "Well, but why are we trying to bridge the divide? My side is actually fighting for what's good. And the other side is making things worse." Take, for example, someone who thinks Trump is a terrible leader, and the other side is trying to get Trump reelected. So yeah, I'm just curious how you think about responding to that kind of critique?

JOSH G: Yeah. Well, this is a great question. And I'd say the goal here is not to get people to agree. In fact, we don't see a lot of people changing their minds on issues, and certainly not relinquishing their partisan identity. I think that what this effort does is it turns down the animosity, and cranks up respect and a sense of trust, even if it's a minimal kind of trust. And I see that as a precondition for having a healthy democracy. So there's nothing wrong with people disagreeing and having a vigorous battle of ideas. I mean, that is, in a very real important sense, how a democracy is supposed to function. But it can't function if there's so much distrust and so much disrespect, that you can't accept the idea of the other side holding power. "They're so bad, they're so evil, they are so insistent on destroying this country, that we can't let them occupy the Oval Office or have full access to the vote." That is what's so dangerous about the moment that we're living in. So, what I see this doing is not trying to make everybody agree or get people to give up their moral or political beliefs. But to get people to the point where they can have at least a baseline level of respect and trust for people on the other side. And that's, by the way, not necessarily the politicians on the other side, but the ordinary people who are nevertheless supporting the kinds of actions that can really erode a democracy.

SPENCER: I think one argument that's potentially compelling for reducing tribalism is that there are a lot of things we could improve about society that the left and right could actually agree on. There are things that just don't work very well, or there are inefficiencies, or there's corruption, or there are situations where everyone knows something needs to be done. And it seems that hating and distrusting the other side is actually an impediment to fixing the things that we all should be able to work together to fix. If we could work together to fix them, they could probably get fixed a lot faster. Whereas, if the two sides are just battling it out, or trying to pull power from each other, these things may not get fixed.

JOSH G: Yep, that's right. To second what you said: we should be working to solve the problems that we can solve. And there have been some areas of progress that mostly fly under the radar, certain advances in criminal justice reform and things like that. But to some extent, our current climate is driven by what you might call conflict entrepreneurs, that is, people who don't want things to work and who thrive on division. Their pitch for power is that the other side is evil, they are out to destroy you, and only I can protect you from them. In a well-functioning democracy, people with competing ideologies might fight where they have to fight but work together where there's some agreement and common ground. But when the fight becomes not so much about policy views but about us-versus-them, about identity, about "we're the good people who love America, and they are the terrible people who are trying to destroy it," then compromise, even where it's possible, becomes something that's viewed as dangerous, as yielding to the dark side.

SPENCER: The Israel-Palestine conflict is on everyone's mind these days. And I'm wondering, do you think that this kind of intervention could work in that kind of arena where the conflict level is just so high and animosity level is so high? Or do you think that different types of interventions would be needed?

JOSH G: I visited Israel in June with these very questions on my mind, and I had the thought that you just expressed, the sort of skeptical thought that in the midst of a violent conflict, the idea of "Hey, let's play a game" doesn't make a lot of sense. Certainly at the moment: we're talking in November 2023, about a month after October 7, when Hamas attacked people in southern Israel ("attacked" being an understatement). But I was pleasantly surprised at how receptive people were to the idea of the quiz game as a useful tool, not for cooperation across the border with the West Bank or with Gaza, but for relations between Arab/Palestinian-Israelis and Jewish-Israelis within Israel. And I began talking with researchers and practitioners about that. At the moment that's on hold, but I haven't given up on coming back to it at some point. This is working towards the long-term health of the relationship between the tribes there. And right now, we're kind of in an emergency situation. So I think, yeah, that'll have to wait. But someday I hope to come back to it.

SPENCER: Obviously, with a conflict like that, there's this really long history and it's incredibly complex. But just thinking about the psychology of the conflict, what do you think our understanding of psychology can do to understand issues like this one?

JOSH G: Well, I think it can help us focus on the things that are most likely to work. Often, intuitively, what we want to do with people we vehemently disagree with is tell them very loudly and clearly why they're wrong. And that does very little to persuade people; if anything, it causes people to dig in. Research in psychology and other fields indicates that that strategy doesn't work very well, however emotionally appealing it may be to people in the thick of intergroup conflict. And psychology can focus our attention on things that are more likely to work. I gave the example of our work on intergroup cooperation. My colleague Mina Cikara, Samantha Moore-Berg, and others, working with a group supported by a wonderful organization called Beyond Conflict, led by Tim Phillips, have done work on outgroup perceptions and what are called outgroup meta-perceptions, that is, what you think they think of you. And Jeff Lees and Mina Cikara conducted a now very influential experiment where they exposed people to more accurate information about what the other side thinks of them. People think the other side doesn't have positive feelings about them, and they're often right, but those negative perceptions are often exaggerated. And that's also true for policy positions. What they've shown is that just telling people, "Hey, they don't hate you as much as you think they do," can soften people's animosity as well. And it fits with this broader idea that cooperation is really at the heart of it, because a precondition for being willing to cooperate with somebody is thinking that they don't hate you. So, things like that. I think we're learning more and more about what works and what doesn't. And what works is often surprising, and not the first tool people want to grab.

[promo]

SPENCER: How does your work on moral emotions relate to something like the Israel-Palestine conflict?

JOSH G: I think it relates to human conflict everywhere. What I've argued, based on my research and a lot of other people's research, is that basic morality is about cooperation within the tribe. And the way it works psychologically is largely emotional. We have all of these different social emotions that might feel like a kind of hodgepodge: empathy and resentment and guilt and shame and a tendency to gossip and things like that. What do these things have to do with each other? You can think of them as emotional carrots and sticks that motivate ourselves or other people to be more cooperative. For example, if you're someone I really care about, and I empathize with your pain, that motivates me to help you. And if I'm not a good helper, then I might feel guilty; that's the emotional stick, and it can motivate me to be more helpful next time. And we have negative feelings like resentment towards other people, or positive feelings like gratitude. But all of those feelings are structured around cultural expectations, and around norms about who's in and who's out. Then at the meta level, we have different tribes, different groups with competing interests and competing values. And the way we naturally relate to humans is with those moral emotions. But the problem is, if you have two different groups with different intuitions about abortion, or whether women should be able to work outside the home, things like that, you can't resolve those conflicts by appealing to intuition, because it's the competing intuitions that are creating the conflict. Just as the cooperation problem at the tribal level is classically called the tragedy of the commons, going back to Garrett Hardin's classic story about the herders who need to cooperate to preserve the pastures, I call this higher-level problem, in my book Moral Tribes, the tragedy of common sense morality. We have groups that are driven by their emotions about what's right and what's wrong, who you can trust and who you can't, and who has the right to this holy land and who doesn't, to take the case of what's happening in the Middle East. And those conflicts cannot be resolved by appealing to our emotions, because that's what's driving the conflict in the first place. In my book, what I argue is that we need a better moral philosophy. And I argue for a philosophy that I call deep pragmatism, which is really a more psychologically informed version of consequentialism or utilitarianism. But if you call it that, especially utilitarianism, people get absolutely the wrong idea, so we'll just say deep pragmatism. More recently in my work, I've thought, "Well, it's pretty hard to persuade people to change their moral philosophy," although I think some people have. So I've been focused more on how we create experiences that change people's attitudes and change their higher-order moral beliefs, and that's what Giving Multiplier and the work with the cooperative quiz game are about.

SPENCER: When you say, "a psychologically informed version of utilitarianism," what does that mean? How does that differ from a kind of more pure philosophical utilitarianism?

JOSH G: Well, strictly speaking, it actually doesn't. That is, if you're really true to your utilitarian first principles, which say to maximize the greater good, you're going to want to do things like take into account the limitations of human psychology. Humans might have a tendency to think, "Oh, this thing that I happen to want, and that happens to serve my personal interest, is really for the greater good." Or you might think that you're better at calculating which actions are going to produce the greater good than you actually are. And so common sense morality has a lot of guardrails that prevent people from being a little too clever. We have basic rules that say, "Don't lie, don't steal, don't cheat." And if you're a psychologically naive utilitarian, you might say, "Well, I'm just going to do what I think is going to promote the greater good. And if that means lying, cheating, and stealing, then I'll go ahead and do it, if I think it's all for the greater good." Whereas a more sophisticated utilitarian will say, "I am not in a position to confidently know the future, or to be aware of when I'm acting in a self-serving, biased way, so I'd better stick to the good old rules of basic common sense and follow the basic rules about lying and cheating and stealing and things like that." So that would be a recent, very salient example of how not to be a true utilitarian, and certainly how not to be what I call a deep pragmatist.

SPENCER: Is the idea kind of like rational thinking, where, in theory, you could be rational by calculating Bayes' rule all the time, but in practice that doesn't actually work, because you can't do the calculations, so instead you use heuristic techniques to be more rational? And similarly, what you're arguing is that if you want to do the best job of being utilitarian in the classic utilitarian sense, you actually should be more like a rule utilitarian, where you're following all these rules instead.
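[Editor's note: to make the analogy concrete, here is a minimal sketch in Python of what "calculating Bayes' rule" involves; the numbers are made up for illustration and are not from the conversation.]

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior P(H|E) via Bayes' rule.
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

# Illustrative numbers: a 1% prior, evidence that's 90% likely under
# the hypothesis and 5% likely otherwise.
print(bayes_update(0.01, 0.90, 0.05))  # ~0.154

[Even this small update becomes intractable when multiplied across the thousands of interdependent beliefs a person holds, which is why heuristics end up doing the real work.]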

JOSH G: But some of it is rules, and some of it is a more pervasive sort of humility. And some of it also is forgiving yourself. One of the classic challenges to utilitarianism is the overdemandingness objection, which is: "Well, shouldn't you be giving all of your money away to buy malaria nets, or whatever it is, to the point where you just barely have a higher standard of living than the people you're helping, or you have just enough to keep earning enough income to keep giving?" And I think that for most people, that's not really realistic. You can't just make yourself do that. You can try, but if you try, you're probably going to fail, you're probably going to be miserable, and you're probably going to give up the whole project. And in the course of using all of your resources for the greater good in this more global sense, it'd be very hard to have friends and very hard to have relationships. Are you going to not have a birthday party for your kid, or not give your kid a birthday present, because the money can be better spent on the other side of the world? So when you factor in all of the things that we need to sustain us as humans, and as humans living in a cultural context, I think what you end up doing is something more like what Charlie Bresler of The Life You Can Save calls the personal best. For example, with dieting, you could have a physiologically optimal diet, but you'd have a very hard time sticking to it, and then you'd abandon the diet and end up worse off. Whereas if you have a reasonable diet that allows you some sweets and some treats, and you can eat some of your favorite foods, but you know it's good because you can stick to it, then that might actually be optimal. So that's another way in which attending to the realistic contours of human psychology can make you a much more effective real-world person, and not the kind of caricature of a utilitarian, like the happiness pump guy in The Good Place, for your listeners who have seen that episode. This is someone who, in a cartoonish way, has devoted his whole life to trying to do as much good in the world as possible.

SPENCER: One thing about advocating a form of utilitarianism is that it boils everything down to one unit: utility. I'm wondering, why approach it that way, rather than saying there are multiple things of value? Some people are going to value justice, some are going to value equality, some are going to value happiness, and some are going to value longevity, and so on. Why push it all into one thing?

JOSH G: Well, in a sense, it's not pushing it all into one thing. Valuing people's health and valuing art and valuing knowledge, you can still value all those things. But the reason why you have to put it on a common metric, at least sometimes, is because decisions need to be made. Even if you think that, in some sense, there's a variety of incommensurable human values, when you have to make a decision, you have to put those things on a common dimension. Now, that doesn't mean that you're doing it explicitly, thinking about these things in terms of how many dollars or how many utiles. When you intuitively make a decision where you are trading off one value against another, there is some unseen point system operating in your ventromedial prefrontal cortex that is putting those values on a common metric and coming out with an answer. So I think it's the reality of choice that forces us to put things on a common system.
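[Editor's note: a minimal sketch of the "common currency" point, with made-up options and weights; nothing in the brain is this explicit, which is exactly the point about the implicit scoring system.]

# Any choice between options that differ on several values behaves
# as if some common scoring had been applied, implicitly or not.
options = {
    "take the job abroad": {"income": 0.9, "family_time": 0.2, "meaning": 0.7},
    "stay local":          {"income": 0.5, "family_time": 0.9, "meaning": 0.5},
}
weights = {"income": 0.3, "family_time": 0.4, "meaning": 0.3}  # illustrative

def score(values):
    # The weighted sum is the "common metric."
    return sum(weights[k] * v for k, v in values.items())

print(max(options, key=lambda name: score(options[name])))  # "stay local" here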

SPENCER: I think there are three different related concepts here that can easily get stuck together but that are actually different. One is utility in the sense of happiness minus suffering, in the classic Bentham sense. Utilitarians often want to put things in a unit like that: everything boils down to happiness minus suffering. A second is utility in the economic sense, where the idea is that a person can be modeled as having a utility function, some function that theoretically describes what choices we would make if we had to compare every single option. And it's debated to what extent this actually fits real humans; there are many ways I think it doesn't fit, but at least it's a model of humans. And then finally, there's the idea that in our brains, a decision has to be made at the end of the day. So there's some kind of hidden function that describes each of our brains and what we actually choose in real-world scenarios, sort of a psychological truth. And I'm wondering, which of those are you really talking about here? Because if you're talking about the utilitarian version of happiness minus suffering, I don't think everything has to be put into that unit at the end of the day. If you're talking about us actually making a choice as psychological beings, yeah, I guess in some sense there's some function that describes what choice we end up making. But that may be different from the choice that we feel we should make, or that we would want to make on reflection.

JOSH G: Right. So what I'm saying is, à la model three, whenever we make a choice between two things, we are in some sense putting them on a common metric, even if it's implicit. My point in highlighting that is just that there's a sense in which having that common currency is inevitable, even if we don't acknowledge it. But the question that I'm trying to answer as a scientifically informed philosopher is: okay, if we're not just going to leave it up to some kind of informal weighing of values in our heads, if we want to actually have a principled answer to the normative question of what we should do, then you need to have a system that puts all values, in principle, on a common currency so that they can be compared. And that is what consequentialism, and specifically utilitarianism, does. Now, that doesn't mean that it's right. You could say, "Well, that is a useful feature of the philosophy, that it is systematic in that way: for any practical question, given enough information about the consequences, it will give you an answer. But that doesn't mean that it's the right answer." And I think you need to separately provide a justification for that. But in terms of which one I'm talking about, I'm talking about the first thing, that is, a normative standard for making decisions involving moral trade-offs, in light of the reality that we always have some standard that's operative when we make decisions involving value trade-offs; we just aren't necessarily conscious of what that standard is.

SPENCER: I think about this topic a little bit differently than you. I think of utilitarianism as making the choice that there's only one thing of value, or maybe two things if you call happiness and suffering different things, and then taking happiness minus suffering. So it essentially assigns zero to all the other values.

JOSH G: That's not how I think of utilitarianism, and I don't think that's how any of the major proponents of utilitarianism think of it. The idea is not happiness and suffering instead of everything else, but rather that all or nearly all of the things we value ultimately bottom out in terms of happiness and suffering, or maybe, to put it the way Sidgwick would rather than Bentham, in terms of positive versus negative states of consciousness. So for example, I love my job; I value my work. And someone might say to me, or someone else who works, "Well, why do you go to work?" And you say, "Well, I like my job, and I also need to make money." "Well, why do you need to make money?" "Well, I need to pay the rent." "Why do you need to pay the rent?" "Well, I don't want to have to live outside." "What's wrong with living outside?" "Well, it gets cold at night." "What's wrong with being cold?" "Well, it's painful." "What's wrong with pain?" It's just bad, right? The idea is that if you keep asking, "Why do you care about that?" then anything that we value, or nearly anything, is ultimately going to bottom out in the quality of the conscious experience of some sentient being. So the idea is not happiness and suffering instead of all the things that we actually think of as our values most of the time. The idea is that happiness and suffering are the values behind our values.

SPENCER: So happiness and suffering are components of many things we care about; they're instrumental components. But if you think in terms of intrinsic values, things that you value for their own sake, happiness and suffering are just one, or maybe two, of those things. For example, take equality, and ask, "Which world would you prefer: a world where each of ten people gets one unit of utility, or a world where one person has ten and the other nine have zero?" Most people would prefer the equal world. The total amount of utility is the same, but most people still have a preference for the equal world. And I think that's because they have an intrinsic value of equality that doesn't boil down to utility; it's an independent intrinsic value, in my view. So what I would say is that things don't all bottom out in the quality of positive or negative experiences of conscious beings; they also bottom out in other things, like people wanting to be alive independently of wanting happiness and a lack of suffering, and people wanting to be remembered after they're dead, independent of the fact that that may not impact their happiness and suffering while they're alive, and so on.
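[Editor's note: the arithmetic in Spencer's example, sketched in Python. The two worlds have identical total utility, so a pure sum-of-utility standard cannot register the widely shared preference for the equal one; something like a dispersion measure is needed to separate them.]

equal_world = [1] * 10           # each of ten people gets one unit
unequal_world = [10] + [0] * 9   # one person gets ten, nine get zero

print(sum(equal_world), sum(unequal_world))  # 10 10 -- same total

def variance(xs):
    # A simple dispersion measure that does distinguish the worlds.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(variance(equal_world), variance(unequal_world))  # 0.0 vs 9.0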

JOSH G: So I'm certainly not claiming that all people are utilitarians, far from it. But I think there's a lot more happiness and suffering behind things like equality than we might think. I actually did some research on this years ago with Jonathan Baron, who's sort of a legend of judgment and decision making; this started early in grad school. Normally, when we think about distributions of goods, we think about either material goods, like cars and houses and medicine, or at least money. But money has greatly diminishing marginal returns. That is, $1,000 to someone who's very poor means a lot more, in terms of the potential for increased happiness, than $1,000 does to someone who already has a million dollars in the bank. So, as economists like John Harsanyi noted long ago, you can explain a preference for egalitarianism in purely utilitarian terms, because when you transfer wealth or resources from those who have more to those who have less, with rare exceptions, it's going to increase the overall level of wellbeing. Now, when people say that they value equality, they are generally not talking about utility, which is a weird abstraction that people either never think about or only awkwardly think about. In fact, the work that I did with Jonathan Baron showed that when people try to think about utility, they can't help but think about it as if it were money, in ways that end up being inconsistent; that's a longer, complicated story. The main point here is that you can be very, very egalitarian on purely utilitarian grounds. And in fact, Peter Singer, the most well-known living utilitarian philosopher, has spent much of his life arguing that we should be striving for a more equal world, where the people who are least well-off can enjoy the benefits that people in more affluent societies enjoy. And I think you can make these arguments about a lot of these cases where it seems like people's values are at odds with utilitarianism, but when you really understand what's going on, they're not so much at odds.
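[Editor's note: a minimal sketch of the Harsanyi point, assuming logarithmic utility of wealth; log utility is a standard textbook choice for diminishing marginal returns, not something Greene specifies.]

import math

def utility(wealth):
    # Log utility: each extra dollar matters less the richer you are.
    return math.log(wealth)

poor, rich, transfer = 10_000, 1_000_000, 1_000

before = utility(poor) + utility(rich)
after = utility(poor + transfer) + utility(rich - transfer)
print(after - before)  # ~ +0.094: total wellbeing rises

[The same $1,000 buys far more wellbeing at the bottom than it costs at the top, which is the purely utilitarian case for redistribution described above.]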

SPENCER: I absolutely agree that a significant part of why people care about equality is because, in the real world, it tends to produce more happiness and more direct reductions in suffering. I just think that people care about it above and beyond that. I do wonder: do you fundamentally think that if we had a complete understanding of the brain, then on some deep level, what people care about, their values, would all come down to happiness and suffering? Or do you think that we would actually find that you need other variables to model it, separate variables for things like equality?

JOSH G: I certainly do not think that people only care about happiness and suffering as such. When it comes to people's self-oriented decision making, that's a pretty good first approximation. But humans are intensely social, and they care about other people. They just don't care about other people equally. And this comes back to our earlier discussion about the expanding circle, from kin, to people we know personally, to people under the same cultural umbrella, to strangers, to members of other species. So for people to be perfectly utilitarian, they would have to value wellbeing impartially, which they don't. And maybe that's not what you had in mind. But we also know that people absolutely do not consistently make decisions that promote the greater good. I spent a lot of my early career studying moral dilemmas, the sort of trolley cases that will be familiar to some of your listeners. And the phenomenon that I was really starting with is: when do people make more utilitarian decisions, when do they not, and why? In the trolley world, the classic contrast is between the switch case, where the trolley is headed towards five people but you can turn it onto a side track where it would only kill one person, and most people say that that's okay, and the footbridge case, where the trolley is headed towards five people and you're on a footbridge between the oncoming trolley and the five, standing next to a large person, or, we'll say, a person with a very big backpack. The only way to save the five (we will stipulate, suspending disbelief) is to push the person with the big backpack off of the footbridge and onto the tracks, where that person will stop the trolley and save the five. Most people say that that's wrong. In fact, last year there was a paper in Nature Human Behaviour replicating one of the studies from my group from 2009, showing that across many cultures and many continents, you see the same kinds of effects, and you see them sensitive to similar factors, that is, the factors that explain why people say it's wrong to push the guy off the footbridge but okay to hit the switch. So, long story short, there's a lot to unpack here. And absolutely, people are not straightforward utilitarians. What I meant earlier was that if you look at people's values, the things they tend to value in a broader, more abstract sense, those things are intimately related to the quality of people's experience. But at the level of decision-making in context, absolutely not; people's judgments are all over the place.

SPENCER: So before we wrap up, how can people use Giving Multiplier, either for themselves or to share with their loved ones? I imagine it's almost like a free win, because by using it, they just get some of their donations matched, right?

JOSH G: Yep, that's right. And we have created a special code for your listeners: givingmultiplier.org/clearerthinking. If you put in that code when you go to the site, then you will get a higher matching rate. At the moment, the matching rate with the code is set much higher than we ever thought we would get to. We add 50% on top if people make a split donation. And if people decide to support one of our highly effective charities 100%, then they get a one-to-one match. We want to reserve the use of the matching funds for people who are new to effective giving, people who are thinking, "I like this idea of supporting the charities that I usually support and that mean something to me personally, but I also like this idea of supporting these highly effective charities that do great work promoting global health or eliminating global poverty." So the code is for you if you're new to this. If you are already committed to supporting highly effective charities, then from our point of view, the best thing you can do is to directly support our matching fund, and you can do that on the website as well. When you do that, you enable us to spread the circle of effective giving even further. You'll see at the top of the Giving Multiplier site a little link that says Fund Us, and that is not for infrastructure; it's for supporting the matching fund, supplying funds that encourage other people to give more effectively.
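[Editor's note: a sketch of how the stated rates would play out on a hypothetical $100 donation. The exact mechanics, such as whether the split match applies to the whole gift or only to the effective-charity portion, are defined on the Giving Multiplier site; this sketch assumes the simplest reading, where the match applies to the portion going to the highly effective charity.]

def match_added(total, effective_share):
    # Hypothetical illustration of the rates described above.
    # effective_share: fraction going to the highly effective charity.
    effective_part = total * effective_share
    if effective_share >= 1.0:
        return effective_part * 1.0   # one-to-one match at 100%
    elif effective_share > 0:
        return effective_part * 0.5   # 50% added on a split donation
    return 0.0

print(match_added(100, 0.5))  # 25.0 added on a 50:50 split (assumed reading)
print(match_added(100, 1.0))  # 100.0 added when it all goes to the effective charity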

SPENCER: It's so rare that there's a situation where you're like, "Yeah, if you just do it this way instead of that way, you get this free bonus." That's pretty cool.

JOSH G: Yeah, and this all comes back to the same theme of cooperation. With the quiz game and with this, instead of saying, "Hey, do this. This is the right thing to do," and sort of "eat your..." (I used to say Brussels sprouts, but Brussels sprouts have gotten so much better; I'll have to say limp boiled broccoli or something like that), we try to make an attractive offer. With the quiz game, it's, "Hey, come play a fun quiz game. Show how smart you are and win some money." And with this, we want to make it an appealing thing, where you can still support the causes that are closest to you, but you can also try this new thing and get some money added on top for the good that you're doing.

SPENCER: Josh, thanks so much for coming on.

JOSH G: Thanks so much for having me. Delighted to be here and really appreciate all the great work you do with the podcast and other things.

[outro]

JOSH C: A listener asks: "What's the best way to get rich these days?"

SPENCER: Well, it's an interesting question. If you look at the way a lot of people have gotten rich in America, for example, what you find is that it's surprisingly common that they did so by owning a business, but not a startup. So not starting the next big tech startup, but owning a business of a type that tends to have some monopolistic pricing power. Maybe it's a car wash, but in an area where there aren't many car washes, so it's one of the few available, and that gives it some pricing power. Or a car dealership, again where there's some kind of local monopoly because, due to whatever rules are in place, there can't be other car dealerships nearby. If you look empirically, a lot of people who have made, let's say, something in the multi-millions of dollars have done it through means like this. If you look at people who are very, very wealthy, usually they've done it by starting a tech startup, or by becoming a professional investor who manages other people's money; Warren Buffett would fall in that category, or hedge fund managers would fall in that category. Of course, there are other methods for making money. But I think those are the two most common ways that people get moderately rich, as in millions of dollars, in the first case, or super rich, as in the case of successful tech startup founders and investors. That being said, obviously, all of these things are really difficult, and there's absolutely no guarantee of success. It's just that those are probably the best bets if that's what you're aiming for, in my view.
