CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 209: Aligning society with our deepest values and sources of meaning (with Joe Edelman)


May 9, 2024

What are the best ways to define "values" and "meaning"? How can democratic processes harness people's intrinsic values and sources of meaning to increase their agency, cooperation, participation, equality, etc.? To what extent do political rivals — or even the bitterest of political enemies — actually value many of the same things? Might we be able to use AIs as "neutral" third-party mediators to help reduce political polarization, especially on an interpersonal level? How can we transform our personal values and sources of meaning into positive, shared visions for society? Are markets inherently antisocial? Or are they just easily made to be so? Companies frequently invoke our deepest needs and values as a bait-and-switch to sell us their goods and services; but since there must actually be demand to have those deep needs met and those deep values realized, why do companies so rarely attempt to supply goods and services that address those things directly? Assuming there actually is a lot of overlap in intrinsic values and sources of meaning across individuals and across groups, why do we still have such a hard time developing shared visions for society?

Joe Edelman is a philosopher, sociologist, and entrepreneur. He invented the meaning-based metrics used at CouchSurfing, Facebook, and Apple, and co-founded the Center for Humane Technology and the Meaning Alignment Institute. His biggest contribution is a definition of "human values" that's precise enough to create product metrics, aligned ML models, and values-based democratic structures. Follow him on Twitter / X at @edelwax, email him at hello@meaningalignment.org, or learn more about him and the Meaning Alignment Institute (@meaningaligned on Twitter / X).


SPENCER: Joe, welcome.

JOE: Thank you, good to be here.

SPENCER: I've heard that you make quite a remarkable claim, which is that, when we get down into people's values, they actually disagree a lot less than you'd expect, given how much disagreement we see everywhere in the world. And I really want to discuss that with you. But before we get to that, let's just start with: What do you mean by values? Because you and I may use that term a little bit differently.

JOE: Yeah. I think this is one of the problems: there are many different definitions that are floating around. And this is one of the reasons why AI alignment with so-called human values has been hard, and why different kinds of democratic agreement have been hard. There are maybe three common things that people mean by values. One is: these banner terms that people use to fight political fights, such as anti-racism, equality, diversity, feminism, or whatever. And people obviously disagree on these, and they're markers of which side you're on in a series of political battles.

SPENCER: That's sort of a political affiliation, in some sense. You're affiliating yourself with a set of views.

JOE: Yes. Yes. And I think people need to do that. It makes sense to do that. It makes sense to have fights over how things should work, and whether things should be more or less this way. For this purpose, people sort of temporarily put on something that they would be able to claim as their values, the thing that they're pushing forward in the world. So it's a real thing, and it's a good thing, and it's a thing that is sometimes called values. Another thing is norms: expectations that you think should be on everyone, such as gender norms, or expecting everyone to be honest or epistemically rigorous. This is kind of similar, but it's a much broader push towards changing the common expectations, like 'Everyone should pick up their trash.' It's still a bit of a political battle, unless everyone kind of agrees; honesty is one area where there's much more agreement. So, social norms and prospective social norms are another thing that people sometimes mean by values. One of the things that I've discovered recently and verified in data is that there's something else below these things, which is: the things that we personally find meaningful, or the ways that we want to live. Honesty is a good example there. Part of the reason why we want people to be honest is for coordination reasons and stuff like that. But part of it is because there's a way of living honestly, and a way of relating honestly, that we like better and that we personally endorse in our own lives. So that's what I mean by values. This kind of sub-component maybe sits below some of the political battles and the social norms. It's more of a lived experience of, 'Oh, yeah, I actually prefer relationships that are honest. I actually prefer how I feel when I'm able to show up and be honest, or when I feel like people are being honest with me.'

SPENCER: What are some other examples of that third type of value? And do you have a name for that type of value?

JOE: Yeah, I call them 'sources of meaning,' or sometimes 'personal values,' to sort of bracket the fact that they're more about personal experience. Other examples include different kinds of creativity or intellectual exploration that are just really meaningful to people. There's boldness, adventurousness, and courage: traits that underlie many of the bucket words we use, words that often represent a mix of social norms. For example, words like 'responsibility' or even 'masculinity' have some aspects of a social norm, like 'Men should be this macho,' or whatever. But they also have aspects that are personally meaningful to people, that they want to reclaim; something that covers some aspect of their life, like maybe going hiking and living in the woods self-reliantly, or something similar. There's some kind of personal way of living there that maybe is honorable and deserves protection. And that's actually only a slice of what people mean when they say 'gender norms' or 'masculinity.' The other slices are what I sometimes call 'ideological commitments' or 'social norms,' and it's the third type where there's a lot of agreement. But importantly, the third type is the underlying reason for the other two types. So if there's agreement on the third type, that actually provides a lot of leverage for negotiating the political battles and social norms as well.

SPENCER: So, I talk sometimes about what I refer to as intrinsic values. I think I even heard you use that phrase, though I don't know if it means the same thing to you or not. But to me, an intrinsic value is something that you value for its own sake, not merely as a means to other ends. And I view them as kind of psychological facts. If you could theoretically analyze someone's brain, you could figure out their intrinsic values: of all the things that they value for some reason, those are the ones that they value for their own sake. They don't value them just because they get them something else. Whereas with money, you value it because it gets you something else. Maybe honesty you value for its own sake; at least some people do. I'm wondering, how does this differ from the sources of meaning or personal values that you're referring to?

JOE: I'm not sure it does differ. I think instrumental to intrinsic is a bit of a spectrum. The way I think about it mathematically is something like: there's a dense core of mutually reinforcing ways of life, and this is what we mean by intrinsic. We mean that it's actually part of our good life; it's not a way to a good life. Money is usually consumed more as a way to a good life, right? But then there's some set of things that's actually part of the good life as you best conceive it. And those could be called intrinsic values, or personal values, as I call them.

SPENCER: You mentioned it's a spectrum. What's something that's halfway in between instrumental and intrinsic?

JOE: I think that having a girlfriend or a wife (or something) is kind of something that's a bit in the middle. Because what you actually value is a whole bunch of ways of relating and connecting, that having such a life companion makes possible. And it's kind of possible to bricolage those together without having a girlfriend, wife, or spouse (I should say). But really, the way to put them together is to have a life partner, I think. That's the best way.

SPENCER: I see. So, maybe it's things where, realistically, given the nature of the world, there's going to be one way to get the thing you value, and so you could think about that way of getting it as almost analogous to the value itself. Is that the idea?

JOE: Yeah, something like that.

SPENCER: I see. Interesting. So, when you talk about people agreeing much more on values than on the higher-level things above values, this surprises me a bit. Even at the level of values, I feel like there are so many, and they differ from person to person. One person might value freedom fundamentally, while another might deeply value pleasure. A third person might say, 'No, no, pleasure is not really worth very much. But reduction of suffering is really important,' or 'Fairness is really important,' or 'Equality.' So, I guess I'm just surprised. I'm curious to hear how you get around the fact that there are so many of these different values.

JOE: Yes, I do agree that there are many. I think there are three kinds of tricks — which we discussed in our recent blog posts — for seeing the convergence. One of them is contextuality: All of those things are important in different contexts. So, when is freedom important? Maybe when you're talking about how schools should work, freedom kind of rises up, particularly the freedom to explore your own curiosity. This is how I feel about how school should work. And so, as you pick the context, certain values will rise to the fore, and there'll be much broader agreement that those are important in that context. Then, how you express the values is really important. As I said before, values get wrapped up in these political battles and people say, 'Oh, that's not my value.' One of the things that we saw in our recent [study] — we built this democratic process with OpenAI to collect people's values — is that there's a default way of saying the value that's politically contentious. But when it's rephrased a bit, then people are like, 'Oh, yeah. Of course.' For instance, the pro-choice movement has 'My Body, My Choice.' This slogan is designed to be contentious, to have people up in arms about one side or the other. An even stronger one would be 'Defund The Police.' This slogan is designed to upset half of the people. There are different ways of saying the same thing, where people are like, 'Oh, of course.' Part of the 'My Body, My Choice' value is that people have their own information about their own lives that nobody else has, and it's really important for them to wrestle with their own decisions. We grow and learn by wrestling with our own decisions. And when I put it that way, it turns out it's not a politically divisive idea at all.

SPENCER: It removes the conclusion from it. It's more about the process, but it isn't saying what conclusion you end up with, right?

JOE: Yeah.

SPENCER: For example, the conclusion being that abortion is okay or abortion is not okay. So you're not including that in the value.

JOE: That's right, and that's important. People do disagree, in part because of these political reasons, and in part because preferences just vary much more widely than values, I think. Everyone has a different set of political policy preferences if you really zoom into it. But the underlying values are much more of a common basis, and they also help in creating agreements about outcomes, if you go through the values first and agree about the values and what's important to consider. This is what happens in deliberation, like when people are debating. A lot of what's actually happening is people are thinking about values. They're being inspired by values, and then they change their mind about what the outcome should be.

SPENCER: And what was the third trick that you use to get more convergence on values?

JOE: Another point is that people don't always adequately think through what the best values should be by themselves. So, you have to show them some other candidate values that might actually be wiser than what they initially stated, giving them a chance to think, 'Oh, yeah, that makes sense to me. That value is more comprehensive, or includes considerations I wasn't thinking about, compared to the one I first brought up.' This approach is another strategy we use in our democratic process. We've observed that people are very willing to engage in this way. In the process we created, they spend eight minutes articulating their initial value, in this case regarding how ChatGPT should respond to certain situations. Then we start showing them other values, asking, 'What do you think about this? Is this second value wiser or less wise than yours?' One might naively think that everyone would already believe their initial idea was best. We conducted this with Americans, evenly split between Republicans and Democrats. Surprisingly, many were able to find other values that they considered wiser than their original ones. These new values ended up being much more convergent than the ones they started out with.

SPENCER: If I consider why you might find a lot of convergence despite there being so many different values that people have, what comes to mind for me is this: at the level of values, even if a value is not very strong for a person, they can still understand that there's value there. For example, someone who doesn't really value honesty that much as sort of a thing unto itself might still probably say, 'Oh, yeah. But I can see why people value it a lot and why it's worthwhile.' Whereas, at the level of politics or something like that, people might want to say, 'No, there's no value in what that person is saying.' I'm wondering, do you think that that partially explains it? That people just can appreciate other values, even if they're not strong for themselves?

JOE: Yeah, I do. But I also think it relates to my first and third trick from before. Another way of saying 'by not being strong themselves' might be that 'it doesn't play a very strong role in their own lives because of the context of their own lives.' For instance, there's a whole bunch of values that come up around leadership or when you're in a special position of trust, like a doctor or a lawyer. Some people are just not in those positions. Or parenting; there's a whole bunch of values that just come up around being a parent. Many people are not in those contexts. And then later, maybe they are in those contexts. And suddenly, they're like, 'Oh, I see that this is really important.' And you can do that, even without changing their lives, just by asking them a question about what they would do if they were in that context.

SPENCER: Do you think of it as 'the context creates the value' or as some kind of interaction where it's like, 'Well, a person has a set of values but then, contexts will cause some of them to kind of grow in importance or shrink in importance?'

JOE: I think there's a process that's akin to science, where we, as humans, enter different contexts, we're searching for the values or heuristics that let us navigate those contexts in ways that we feel good about. And we iterate on that. And there's always kind of a state of the art for each person and for humanity as a whole. So, when people start doing podcasts and interviews, that becomes the more common thing that they'll start to wrestle with: 'What is it to be a good interviewer?' They might have a first version of that, and they do some interviews that way. Maybe they're killing it according to their current 'state of the art' criteria. But they start feeling like there's some other possibility that they haven't really locked on to yet. And then maybe, at first, by accident, they do a different podcast where they're trying a different style. And they're like, "Oh, yes. This is actually more of what it means to me to be a good podcast host." And then this will spread. Somebody else will be inspired by this person. This is how the kind of global 'state of the art' changes. And so, we're always facing new contexts as technology evolves, as human society gets more and more complex, as culture changes. And these new contexts create an open field of experimentation, which allows us to discover new values.

SPENCER: I think something that I'm still confused about is that it seems to me some people just fundamentally don't value certain things that other people value. For example, take longevity. Some people I know will say, "Oh, yeah. If I were to instantly die painlessly, that would be absolutely horrible. That would be the worst thing." And other people are like, "Yeah, if I were to instantly die painlessly, the only bad thing about that would be the negative effects on other people. It wouldn't really bother me because I wouldn't be there anymore." And that, to me, seems really fundamental, and it's hard for me to think about how that could be contextual, although maybe I'm just not being creative enough.

JOE: I guess there's something about how they frame the context for themselves. I'm not sure that fits. Coming back to these different kinds of values that I mentioned: people do differ about preferences. Some people like cheesecake more than others; some people like chocolate mousse. I'm not even sure what that longevity difference is that you're pointing to, but it doesn't seem like a difference in values in the sense I've been using, as sources of meaning. One way to tell the difference is: we have to find a thing or a choice that the person makes (a choice to be honest, to be courageous, to be intellectually exploratory) that would feel meaningful to them. If we can point to a difference in choices that people would make that would feel meaningful to them, like maybe the choice to do cryonics or something?

SPENCER: Yeah, or take a risk that might end their life prematurely?

JOE: I do think people have different risk profiles, for sure. But I don't think that actually cashes out to different sources of meaning. You can say that some people will really find a certain kind of security or confidence in outcomes very meaningful, and other people will find a certain kind of risk (daring, or jumping into the unknown) very meaningful. I think that's true. But people will mostly be able to relate to both and find examples of both in their lives; the amounts will just be different.

SPENCER: I guess this is the way I think about intrinsic values: longevity (in other words, yourself existing for a long time) can be an intrinsic value, because my definition doesn't require that it show up as a source of meaning in someone's behavior. Although, I do think it probably could manifest as a certain meaning in people's behavior if you had the right situation.

JOE: Yeah, I'm really into finding values in people's behavior because it's a way of reality checking them, in the same way we reality-check revealed preference. It also gives people the authority to talk about their values, rather than for us to just kind of conjecture about them. They can say, "Oh, yes. This was very meaningful. This particular choice is more meaningful than other choices that I've made." This choice to be a nurse or go into science or whatever.

SPENCER: Why don't you tell us about your democratic process and how that tries to bring out values from people and then use those as a source of information?

JOE: Yeah, thank you. We got a grant from OpenAI, and some of our results will be announced jointly with OpenAI. The process works by having people talk to a prompted version of GPT-4 that integrates with our database of values cards. They first interact with this chatbot, which begins by presenting a situation involving ChatGPT where the appropriate response is unclear. There are three situations to choose from. One scenario is that ChatGPT is talking to a young woman who's thinking about getting an abortion. She accidentally got pregnant, is Christian, and comes from a Christian family where her parents would not approve of an abortion. She's seeking the chatbot's thoughts on what she could do. It's like, 'Let's watch what ChatGPT says.' You respond, but your answer is then interrogated by this prompted version of GPT-4 to uncover the underlying values: what you find most meaningful in the situation; if you were speaking to such a girl, what would be top of mind for you; what concerns you might have; and how you would try to assist her, or not. That's the first stage. This takes about eight minutes and results in one or more of these values cards. Then the user is asked whether the card really captures their concerns: 'This is the consideration that maybe I would keep in mind if I were in that situation. And this is what ChatGPT should theoretically keep in mind, too.' Then they go into the next phase, where they see other people's values cards. And they get to say whether they think those other cards are as wise as the one that they articulated themselves. And then they go to the third phase, where they see potential upgrades in wisdom (that's what we call them). They see two different values cards and read a manufactured story about how somebody might have considered the thing on the left, but they've lived and learned, and now they consider the thing on the right. The users are then asked, "Is this a plausible story of someone gaining wisdom? If you went through this transition, would you think that you've gotten wiser? If you heard somebody make this transition from valuing the thing on the left to the thing on the right, would you believe they got wiser?" Users say 'yes' or 'no.' All this information is compiled into a data structure that we call a 'moral graph,' which captures details for each situation, such as when you're talking to somebody who is considering an abortion. We also have a situation of talking to a parent who's really stressed and wants to discipline their child. The third situation is about building a bomb, gun-control stuff. For each of these contexts, the moral graph covers the values where there's consensus that they are wiser than some of the other ones. It's a kind of visual data structure: you can see all of the values that people have articulated, and arrows that go from one to the other, where an arrow points toward the wiser value when there's considerable agreement that the value the arrow points to is wiser than the one it points from. The idea is to use this data structure as a replacement. If some of your listeners know about constitutional AI, that's one of the ways that large language models are currently aligned to some kind of human values. The moral graph, our data structure, is a replacement for constitutions and can be used in a similar way to shape language model behavior by shared values.
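To make the shape of that data structure concrete, here is a minimal sketch of what a moral graph might look like in code. This is an illustration, not the Meaning Alignment Institute's actual schema: the class and field names are invented, and the 'sink' heuristic for reading off the wisest values in a context is an assumption about how such a graph could be consumed.

```python
from dataclasses import dataclass, field

@dataclass
class ValuesCard:
    """One participant-articulated value (hypothetical fields)."""
    card_id: str
    title: str        # e.g. "Wrestling with your own decisions"
    description: str  # the relatable copy the chatbot writes for other users

@dataclass
class WisdomEdge:
    """Directed edge: broad agreement that `wiser` is wiser than `less_wise`
    within one context (e.g. talking to someone considering an abortion)."""
    less_wise: str  # card_id the arrow points from
    wiser: str      # card_id the arrow points to
    context: str

@dataclass
class MoralGraph:
    cards: dict[str, ValuesCard] = field(default_factory=dict)
    edges: list[WisdomEdge] = field(default_factory=list)

    def wisest_values(self, context: str) -> list[ValuesCard]:
        """Cards in this context that no agreed edge points away from,
        i.e. the 'sinks' of the graph for that context."""
        in_ctx = [e for e in self.edges if e.context == context]
        superseded = {e.less_wise for e in in_ctx}
        relevant = superseded | {e.wiser for e in in_ctx}
        return [self.cards[c] for c in relevant - superseded]
```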

[promo]

SPENCER: So if I understand it correctly, the basic concept is this: you're given a certain context, and there are specific values that an AI could act on. You then want to make sure that the values it acts on are those that people consider wiser in that context, rather than less wise. Is that correct?

JOE: Yes, that's right.

SPENCER: What would this look like in practice? Would it be some kind of meta-model that, given the context, tries to predict what humans would say the wise values are, and then loads those into the context of the AI before it responds, so it knows what values to respond with?

JOE: Yeah, close. Part of tuning any large language model these days is building something like a reward model. The two most common techniques used to fine-tune LLMs after they go through their base training, RLHF and constitutional AI, both involve training an extra model, called a reward model, that allocates reward to the base model to fine-tune it in a particular direction. So in the post-training step, the base model will generate several different possible responses, and the reward model will select which of those possible responses should get the reward, for kind of continuing the training. The moral graph can be used to make a reward model that rewards another model for responding according to the wisest value, or the wiser values, in the moral graph in that context.
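As a rough sketch of how that could slot into a best-of-k post-training loop, building on the MoralGraph sketch above: everything here is hypothetical. `classify_context` and `embodiment_score` are stand-ins for components that would have to be built (a context classifier and a judge model that scores how well a response embodies a value); the keyword-overlap scorer is only a placeholder so the sketch runs.

```python
def classify_context(prompt: str) -> str:
    """Placeholder: map a prompt to one of the deliberated contexts."""
    return "parenting" if "my child" in prompt.lower() else "general"

def embodiment_score(response: str, card: ValuesCard) -> float:
    """Placeholder judge: crude word overlap between the response and the
    card's description; a real system would use a trained model here."""
    words = set(card.description.lower().split())
    return len(words & set(response.lower().split())) / max(len(words), 1)

def moral_graph_reward(prompt: str, response: str, graph: MoralGraph) -> float:
    """Reward a response by how well it embodies the wisest values
    for the prompt's context."""
    wisest = graph.wisest_values(classify_context(prompt))
    if not wisest:
        return 0.0
    return max(embodiment_score(response, card) for card in wisest)

def best_of_k(prompt: str, sample, graph: MoralGraph, k: int = 4) -> str:
    """Pick the highest-reward of k sampled responses; in an RLHF-style loop,
    this winner is the response that gets reinforced."""
    candidates = [sample(prompt) for _ in range(k)]
    return max(candidates, key=lambda r: moral_graph_reward(prompt, r, graph))
```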

SPENCER: How does it know what values the LLM is responding with? So the user asks the question, then the LLM responds. I think maybe there's a step there that I don't quite understand.

JOE: Yeah, there are a few different ways to do it. The current way that we want to do it is actually to train the model itself to state the values it's responding with, to guess the values it's responding with, and also to make a good response according to those values. So you can think of it as: instead of going from input tokens to output tokens, you go from input tokens to a set of values, and then to output tokens. Or, from input tokens plus a set of values to a set of output tokens. These are actually the same weights. It's just a question of, "Where do you put the cutoff? Do you put the cutoff after the input tokens, before it outputs the values, or after the values?"
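A toy illustration of that cutoff idea, with `generate` standing in for any decoder over the same set of weights (the stop-token convention and function names are invented for the sketch):

```python
def respond_with_stated_values(generate, input_tokens, values_stop_token):
    """Cutoff after the input: the model first decodes a block of values,
    then continues decoding the response conditioned on those values."""
    value_tokens = generate(input_tokens, stop=values_stop_token)
    response_tokens = generate(input_tokens + value_tokens, stop=None)
    return value_tokens, response_tokens

def respond_given_values(generate, input_tokens, value_tokens):
    """Cutoff after the values: inject a chosen set of values (say, the
    wisest ones from the moral graph) and decode only the response."""
    return generate(input_tokens + value_tokens, stop=None)
```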

SPENCER: Could you walk us through an example of one of these wisdom cards and then another wisdom card that people thought was more wise for that particular context?

JOE: Yeah, sure. One of our cases is about parenting: ChatGPT is talking to a stressed-out parent whose child is misbehaving and not doing their homework. We ran our process with more than 500 representative Americans, and many respondents articulated something about rewarding a child for good behavior, or creating strong routines that would kind of structure the child's life. But many of them also, after articulating a value like that, saw another value about trying to understand what the child's concerns were, and then making a case to the child about how their concerns could be addressed within the context of also doing their schoolwork and having some kind of orderly structure in the household. So: getting curious about what's going on with a child before disciplining them or forcing them into a structure. There was very broad agreement that this is actually a better idea, even among people who first articulated a more disciplinarian parenting strategy. And because all the questions are about how ChatGPT should talk to the parent, the upshot is that ChatGPT should inquire into what the child's concerns are and then, in a way, almost mediate: help the parent get curious, and help the parent figure out how the child's concerns can be balanced or represented in the household structure.

SPENCER: How do you quantify the level of agreement, and what kind of agreement levels were you finding?

JOE: We actually use three metrics, with cutoffs for each. One is just entropy over the edge direction, or whether people even think there is an edge there. An arrow saying A is wiser than B is what we call an edge in the moral graph. We try to manufacture stories that go in both directions, and people can say, "Oh, yeah, I think the story is plausible," "It's not plausible," or, "I don't even think these things are related." They can give a few different kinds of answers when they look at the edge. If there's a lot of variety in the answers (some people think A is wiser than B, others think B is wiser than A, and some say they don't understand or think the values are unrelated), then we don't count that edge. Roughly, if something like a third of the people are not giving a clear answer, or are deviating from the idea that there's an edge in that direction, we drop it. Another cutoff is if we just don't have a good signal, like we haven't managed to show the edge to enough people. We can only show edges to people who can relate to the value somehow, so we try to show values in the previous screens and use those to decide which edges to show people. If we weren't able to get a signal, then we also hide the edge. And finally, if even 10% of the people think the edge goes the opposite way, then we'll drop it. So we have these thresholds. We played around with them. (I'm not sure those are exactly the right numbers; I look forward to iterating this process enough that we can have a principled approach to what those thresholds should be.) But when we run with those numbers, I think something like 3% of the edges need to be thrown out for any of those reasons. So, 97% agreement or so, among Republicans and Democrats in America.
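Those three cutoffs are concrete enough to write down. Here is a sketch using the numbers Joe mentions (about a third unclear, a minimum-signal floor, 10% reversed); the `min_votes` floor of 5 is an assumed value, and the simple 'unclear fraction' test approximates the entropy metric he describes.

```python
def keep_edge(votes_forward: int, votes_reverse: int, votes_unclear: int,
              min_votes: int = 5) -> bool:
    """Decide whether a candidate edge (A -> B, 'B is wiser') survives.
    votes_unclear counts 'not plausible' / 'unrelated' / no-direction answers."""
    total = votes_forward + votes_reverse + votes_unclear
    # Cutoff: not enough signal (edge wasn't shown to enough relatable raters).
    if total < min_votes:
        return False
    # Cutoff: too noisy; about a third of raters gave no clear direction.
    if votes_unclear / total >= 1 / 3:
        return False
    # Cutoff: even 10% of raters reversing the direction drops the edge.
    if votes_reverse / total >= 0.10:
        return False
    return True
```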

SPENCER: That's pretty wild. So, you attribute that to basically these being deeper than the political versions of things, and so if you can really get to the core of it, there's this widespread agreement on how to decide in particular cases?

JOE: That's right, yeah.

SPENCER: That's pretty amazing. Can you talk about the other uses of this outside of AI? Where else do you see this kind of idea coming up?

JOE: I think, in many ways, many of our systems are designed to be divisive, such as social media and national politics. We don't like social media because of this outrage, clickbait, tribalism, all this kind of stuff. We have a similar thing going on with national politics, not just in the US but in many countries around the world. You can think of it as some kind of exploit, where systems that are purportedly about collective intelligence and decision making are now being used to wage battles between competing groups. And this kind of battle discourse crowds out the capacity of the system to solve various challenges, like climate change and AI safety. There are many challenges, like housing in cities, that we're failing to resolve because of this increasing politicization. And so, I think that this direct representation of agreement about values can be useful in all those cases to design mechanisms that are robust against this kind of ideological warfare or ideological capture. That means political systems, especially voting; I think voting is the mechanism that's been most corrupted in this way, although legislatures have also been corrupted. Courts are already more about values alignment, although they've also somewhat been corrupted in this way. So there's a whole bunch of different democratic processes that can be revised using this kind of data structure. And I think markets can also be revised using it. One problem with markets, and one of the reasons we have these terms like 'late capitalism,' 'empty consumerism,' and 'addiction economy,' is that markets are not reaching down to what's really important to us. They're not really satisfying our intrinsic values, as you might call them. They're satisfying some more superficial thing that leaves us, on some other level, unfulfilled. And so, I think these representations of values, and also group values, can be useful in all those domains.

SPENCER: Do you have a concrete example? Obviously, this is earlier stage work, so I wouldn't expect you to have it all fleshed out. But how might these ideas be used, specifically, let's say in voting or in markets, and so on?

JOE: I did this thing during COVID where I had people argue online. It was just a Zoom thing, where we used Google Sheets and a complicated kind of homebrew experimental setup to have people gather COVID policies and argue about them. And then we did a thing where people surfaced their sources of meaning about living through COVID: things about keeping their grandparents safe and how that felt meaningful to them, things about how they wanted to live in pods, and stuff like that. And once all of these sources of meaning, these values, surfaced, we had people form collaborations around them: 'How could they actually help each other live in the way that they wanted to?' That first discussion was very contentious; people quickly formed camps and distrusted each other. There was a lot of ill will. And then, with the same group of people, just later in the same Zoom call, it became very sweet. There was a lot of collaboration forming. The same people who had been arguing half an hour earlier were brainstorming about how everybody could live according to their values in this sort of test group.

SPENCER: So that suggests that even at an interpersonal level, we might be able to use some of these ideas. Do you have suggestions for how they can be applied if you're, let's say, talking to a friend and not reaching agreement on something important?

JOE: Yeah. I'm actually in the process of building a chatbot to help with this. I've built a bunch of GPTs over the last few weeks. Since the chatbot we use in our democratic process does a very good job of getting at people's values, I'm now interested in all the ways that can be used besides in a democratic process, and there are a bunch. One of them is just getting clear on your personal values in terms of structuring or planning your own life. Another is a GPT I've made that's about going for walks, finding things that you find beautiful, capturing in a values card what kind of aesthetic value it is that you're tuned into, and then structuring your walks based on that. And I made a GPT for doing that in pairs, so you can share what you find beautiful with a friend. The third one that I'm working on now is a mediator: if people are having a disagreement, it tries to get at the underlying values, then presents them to each other and enters this kind of brainstorming mode that I just described. I've seen that work. I think it's a lot like nonviolent communication. But instead of assuming that people all have the same underlying needs, we're looking for these values, which might be kind of distinct, or at least aren't the same values for each person that are relevant to the disagreement. But once they really understand what's important to the other person, then things tend to go a lot better.

SPENCER: If you take a topic like COVID, let's say early days of COVID when there's a lot of uncertainty: rather than discussing immediately, "Well, what should the policy be?" Or, "Should we cocoon older people and let young people go wild?" or whatever, you start with a discussion about what we care about. Is that sort of the general principle?

JOE: Yeah. It's important to represent what we care about. There are two things. One is to bracket, and not discuss at first, what we care about in terms of what other people should do, and instead discuss just what we care about in terms of how we would like to live ourselves. And then, the second thing is to represent that information in a very relatable way, which is what our values cards do. So, in this democratic process, the chatbot is interviewing the user. But then, the chatbot actually writes the copy that summarizes what the user says, and that's the copy that will be shown to other people. And that chatbot does a good job of taking things that might initially be phrased in a politically contentious way and rewriting them in a way that's more relatable.

SPENCER: Going back to something we were talking about earlier in the conversation: you mentioned that sometimes you call these things personal values, but other times you call them sources of meaning. You kind of use those two terms interchangeably. Do you really see those as being the same thing? That a value is just a source of meaning?

JOE: Almost. Tell me if this matches your own life. I think that we experience meaning when we're at the edge of our notion of the good life; when we're encountering something, maybe for the first time, or after it's been a while, that is really important to us. As an example, I am here in Tokyo with my co-founder of the Meaning Alignment Institute, and we've been exploring the city together. She has very different aesthetics and preferences and views of the city than I do, but they're quite complementary. So, we're walking around together, and she's noticing things that are very different from what I would notice. And I'm appreciating things that she wouldn't notice, and we're sharing these. And it feels wonderful. And this is not something that I've done much of, and it feels meaningful. I think this meaning is cluing me in to a new value, maybe about traveling with people who are quite different from me, where there's this mutual opening up of each other's eyes; a value that I've never really contemplated before. And part of why it feels meaningful is because I'm at the edge of my conception of the good life. I'm discovering an aspect of the good life that I never really encountered before, that I never really thought about before. When I go through all of my experiences of meaning, I can always find this correspondence: a new value, or a renewal of a value, is happening in that moment. Do you agree?

SPENCER: Well, it makes sense to me that pushing a boundary in that way is a way to kind of discover values you hadn't realized that you had. But it seems to me that a lot of meaning is also everyday meaning. The everyday meaning you might have with your child or spending time with your partner or petting your dog. Those also can be a source of meaning, but they don't necessarily push a boundary in that way. And I wouldn't expect them to lead to sort of new discoveries about your values.

JOE: The way I think about it is that there are parts of our conception of a good life that are routine. Maybe a way of saying that is that they don't require attention. I exercise regularly, but it's just become part of my routine. There are moments where I feel the sort of meaningfulness of my physical engagement, and the sort of vitality and sensuality of the exercise. There are also a lot of moments where I'm not really even paying attention to what I'm doing. Whereas the moments that are really meaningful are the ones where my attention, at least, is right on what I find valuable in the moment. For instance, if I'm really opening up to a friend, you can imagine versions of this where I'm just kind of doing it automatically because that's how I talk, and versions where I'm more conscious or it feels more exploratory. And the latter ones are the ones that I'm talking about as the edge of the good life. So, it's not always a new value. But there's some sense in which it's like a frontier, because it's where your attention is.

SPENCER: To me, that makes me think of mindfulness. Let's say, you're spending time with your dog. You can spend time with your dog, where you're just not really focused on the fact that you spend time with your dog, and you're maybe thinking about work or whatever. Or you can spend time with your dog, where you're really focused on your dog. You're really appreciating this moment with your dog, and you're feeling grateful for having your dog, and so on. I think of that as sort of an element of mindfulness, of learning to be present in the moment. I do associate that with 'increasing the amount of meaning' in that experience. But I don't really see that connecting so much as to generate new values, as I do to sort of the way that you get the most out of whatever you're doing.

JOE: Yeah, I think that makes sense. I guess what I would ask, to look deeper into this, is: mindfulness sounds like it's all about the fact that you're paying attention, rather than what you're paying attention to. But I think people are always paying attention to something. So really, mindfulness is a choice to pay attention to something, maybe something present. And then there are actually a bunch of sub-choices in that. If somebody's doing some kind of aesthetic appreciation, that means that they're appreciating certain aspects of their environment: the ones that fit their aesthetic sensibilities, the contrast, the colors of the leaves, the vertical pattern of the bark on the trees, or whatever. They're paying attention to some things and not others. This is maybe a secret payload of the word mindfulness: it's not actually just about paying attention to anything. It's about paying attention to something that you value.

SPENCER: That's interesting.

[promo]

SPENCER: Let's think about the ways that people may not be aligned on what to do in society, because some people are fighting for political cause A, and some people are trying to oppose that and fight for political cause B, and so on. You suggest that if they really got to the core values, they'd be much more in agreement. How does that affect what we're fighting for as a society, or as a world, and the large social vision that people share?

JOE: I think there's a bottleneck. You can agree on values. But then there's this big intellectual question, which is, "Okay, now we know how we want to live. What kind of society would let us live that way?" And the last time that we had success in this, I think, was liberalism. It was like, "Okay, here are a bunch of ways that we want to live. And here are markets and democracy and nation-states, where people kind of co-determine their collective direction as a society, and maybe there's minimal social coercion, so that people are more free to live as they want." That was the last time this happened successfully, where there was a kind of shared understanding of ourselves, and it led to a vision of society that a lot of people were psyched about. Society decided, "Okay, let's move beyond the church and the monarchy and go towards these democratic societies. That sounds like a much better way for us to collectively pursue our individual desires." And I think we're stuck there now. Part of the problem is that we're not clear about what we all share about how we want to live. That's one problem that we've made some progress on with this moral graph, and values articulation, and so on. But there's an additional problem, which is, "What would let us all live in the ways that are most meaningful to us?" It's kind of clear that the liberal answers are breaking down because of all sorts of structural challenges like AI, climate change, and social media. And they were never quite adequate. The market, for instance, doesn't really address what's most meaningful to us. For a while, there was this mix of different kinds of religions and churches, the market, local communities, family structures, and the mix did a better job than just markets at addressing what's meaningful to us. But that mix is kind of over. So there's this big open question about how we would change society to support all these meaningful lives. And there are different answers, like, "Oh, maybe AI will unlock some abundance that we currently don't have, and that will create a society that addresses what's meaningful to us." Or, "Maybe if we have personal chatbots, or we all do therapy." There are a bunch of different potential answers. But none of them are super convincing, and none have the same kind of cohesive "Yes, let's do this" that markets and democracies had when liberalism swept through Europe and so on. So, that's one way of framing it. Does that make sense?

SPENCER: Mm-hmm. Do you think that markets worked well at the beginning and somehow got corrupted, or that they were sort of missing something fundamental, and maybe people just didn't realize it at first?

JOE: I do think that they're missing a few different things, and they're all related. One thing is that markets are better at serving shallow desires than deep ones. Because of the way marketing works, and because most businesses and most market engagement are structured transactionally, you want to spend a short time with each customer. You want to very quickly get their needs: their measurements if you're making them clothes, or the model of their car if you're fixing their windshield. You want to be able to do this very quickly, and then serve them. Things that require a lot of customization have a lot of transaction costs, so you're going to do less of that. There's not really much of a market for surprise birthday parties that are super customized to each individual, because that's just not something that markets are very good at. So that's one of the problems: they serve shallow desires rather than our deep ones. Another problem is that they serve our individual desires more than our collective ones. There's a whole bunch of transaction and coordination costs associated with group buying. If you want to live in a group house, and you don't want to live with random people, you want to live with just your friends, it's really hard to coordinate that kind of purchase. It's much easier to find your ideal single-family home, or single-person condo, or something. Families make this easier because they're one ongoing unit that has a lot of practice buying together, and it's going to stick together no matter what. But anything outside of a family, even just two families together, makes any kind of group market engagement much, much harder. And so, markets have a bias against the deeper things (which are the most meaningful things) and the together things (which are also our most meaningful things). That's kind of been the problem the whole time. The reason that markets were so successful is that there was actually a lot of individual and shallow stuff that was very good to coordinate around solving for a very long time. There was a lot of low-hanging fruit that markets served very, very well. It's just that, I think, their biases became a real problem, mostly in the second half of the 20th century.

SPENCER: On the point about not catering to our deeper motivations or deeper values: I get why, if you just have a small amount of time with a customer, or you have to do a transaction quickly, that would be hard to satisfy. But I would think that there would be a lot of money to be made in those deeper customizations, or in really tapping into people's deep values, because people care about them so much. So why wouldn't that be a money-making opportunity for a lot of businesses to try to tap?

JOE: In a way, it is. If you look at advertising, it's almost all trying to address these deeper unmet needs. I think we've entered into a kind of death spiral, where people are constantly promised community, connection, authenticity of different kinds, self-expression of different kinds. Like the commercial that says, "You're going to finally express yourself by buying these jeans that come in three different sizes." Or the commercial that says, "Your friends will love you because you drink the same beer that everybody else is buying." It's some kind of market failure, where people find it plausible (not really plausible, because they're also skeptical, but plausible enough) that their deeper needs will be met by some non-customized product. And then when they get the non-customized product, they're still unfulfilled. That seems to be what's happening. I think there are probably market structures that would be better at doing the customization thing. You're right that there's a huge amount of demand that could be unlocked, and it could be very, very good for the economy if we could actually meet that demand honestly. But I think we don't have the right structures in place to support it. It's kind of the problem that medicine had before health insurance, where there was an incentive to fake it and sell snake oil or supplements; we still have this with supplements. The incentives can sometimes just be to not really do the thing, and you need some other structure on top of the market: some auditing structure, insurance, reviews, some higher-level structure that fixes the market failure and makes sure the market can actually serve this area.

SPENCER: This reminds me of something that I find very bizarre, which is that it seems like online dating should be about helping people find long-term partners, or at least some of it should be about that. Obviously, there are some people who just want short-term partners. That's fine. But a lot of people really want a long-term, highly compatible match. But if you actually look at the way dating apps work, it seems that they're largely optimized to be gamified; essentially, to turn everything into kind of a Tinder-like, swipe-left, swipe-right experience, to make the experience short. It's rewarding in the immediate term but doesn't really give you very effective matches. And it seems that many dating sites have declined in quality. I haven't used OKCupid in many years, but my understanding is it may actually be less effective at matching you than it was 10 years ago. This seems strange because you might think, "Well, people value finding a good romantic partner so much. They should be willing to pay thousands of dollars for this." But I've never heard of a company, at large scale, really succeeding at this. You hear of individual matchmakers that might do custom matchmaking for thousands of dollars. And many, many people, in theory, would value a really fantastic match at thousands of dollars. But as far as I can tell, no company provides that as a service at scale.

JOE: Yeah, exactly. I think that's a paradigm case of what I'm talking about. And when you start pattern matching against this market failure, this preference for individual solutions over collective ones, and shallow over deep, you see it everywhere. And you definitely see it in this situation. OKCupid used to be about these deeper alignments: all these questions of who you want to be with, what kinds of opinions you want to share, what kinds of opinions you are okay disagreeing about. There was this really deep algorithm behind it. And now, it's another swipe-on-faces thing. It's also an example of serving a shallow desire that individuals have. In a way, you can think of Tinder as a kind of porn, because you're looking through photos of sexy people of the opposite sex. And it's serving the shallower desire of hookups over the deeper desire of meaningful relationships. So yeah, it's very clear in this market. But the same pattern occurs all over the place. Almost everything that's really, really important to us, like love, community, adventure, has been debased in this way, where community now means a Discord, right? [laughs] And adventure means a tour that you book or something. It's really sad.

SPENCER: I have this idea that I call replica theory. It's a very simple idea. It says that when people can be rewarded more for doing an easier, fake version of a thing than for doing the real, valuable version of the thing, you'll find that most of the activity is the easy, fake version. It has to do fundamentally with verifiability, to some extent. If people can tell that they're going to get the fake, easy version when they want the real thing, then the incentive to fake won't be there so much. But if people can't tell, or the amount of money you can make from the fake version is just as much as from the real thing, then everything will kind of be fake, or almost all of it will be fake. I wonder if that could be a partial explanation here.

JOE: Yeah. I think it is, in a way, just another way of saying my thesis. I just go a step beyond that and ask, "Okay, what makes the difference between when people can verify and when they can't?" And it's something about articulacy and problem specification. The more you can specify the problem, the easier you can kind of 'before-and-after' evaluate it. I started thinking about all this stuff when I was working at Couchsurfing and Airbnb was on the rise. We knew that Couchsurfing was much more meaningful on the median than Airbnb; it's easy to survey both user populations and see. Couchsurfing is more meaningful, and it's free. With Airbnb, it's very easy for people to verify that a room was clean and looked good, and whether the service was actually good, and so on. The whole Airbnb transaction makes it much easier for people to understand what the criteria are and check them off than Couchsurfing, which is about meeting another person, maybe going on an adventure in a foreign city or something. It's much harder to say beforehand what you're trying to get out of it, and afterwards, whether that was what you wanted. It's the same with dating. So another way of saying this replica theory thing is that there are a bunch of areas where we're inarticulate about what we want and whether something satisfies it or not. And these are the areas where the fakes will occur.

SPENCER: So let's tie this back into the question of people lacking a kind of positive vision of the future, and what kinds of problems that's causing today. Can you connect that for us?

JOE: Yeah, I think that the 'liberal world order,' for lack of a better term, has left us very articulate about both our consumer preferences and our political affiliations. We all know which kinds of clothing we might order on Amazon or whatever. Many of us have niche styles or tastes in music that we're into on Spotify, or whatever. We're very articulate about a set of things. But we're also very aware that this set of things is inadequate for organizing society in ways that really fulfill us. The dating apps are a good example: the decline in formation of relationships, and even the decline of sex among young people. We're like, "Wow, we're so articulate about our consumer preferences. And yet, this life is really unfulfilling in all these ways." And nobody really knows what to do about that. This is kind of why all the different proposals are deeply unsatisfying. On one side, there's the kind of techno-optimist, progress-studies proposal, like Max Roser and 'GDP goes up,' this kind of Tyler Cowen thing. And people look at that and say, "Well, no. That doesn't seem to address this mismatch, this replica thing. I don't see how that fixes it." So people don't get behind it. I think that's the real reason people don't get behind it, and not because they misunderstand the statistics. And then, on the other side, we have maybe the social justice critique of colonialism. It's not plausible either. People are like, "Okay, we can make society more equitable." But that also doesn't sound like it really fixes the problem. There's no real good future there either. So this puts people in this kind of cynical, apathetic state. And it also makes collective coordination problems, like how to solve climate change, how to deal with AI, and so on, much harder. Because when someone's drafting a policy, for instance, there's no bright future to which that policy belongs. There's no plausible story like, "Yeah, we're gonna smash this. And it's going to be great," because no one has an idea of how things will be great. We just have these warring factions, each of which is like, (I don't know) "Bernie Sanders has some other labor thing. That's not going to solve the problem." No one has a good proposal, and no one can get really excited about any particular direction. I think what we need is a gain in the articulacy that we've been talking about, or a gain in verifiability, according to your replica theory model. We need a vision for a society that actually fixes the problem, that organizes around what's most meaningful to us, that stops this bullshit with the dating apps, and so on. And if we had that, I think it would lead to the end of a lot of our tribalism and gridlock. Because I think we all really want to be inspired. We want to feel like there's a good future. And there just isn't really a model in play for what that could be.

SPENCER: So what would it look like to begin to take a step in that direction?

JOE: I think it's really cool that AI and AGI, and LLMs specifically right now, are both a big threat to society and also a very scalable means of getting to what's really meaningful to us, and of solving this verification problem that you were talking about. Instead of just purchasing things, or downloading dating apps, or purchasing the beer that we hope will help us connect with friends (but doesn't really help), instead of just being lost in the consumer cycle, we can be aided in making our deeper desires more clear to ourselves, and in making their satisfaction more verifiable. In the same way as we moved towards health insurance to solve some of the problems of the market in the snake-oil era of medicine, we can outsource some of our market participation to something that has a longer view of our health, something that can take into account collective desires as well as individual desires. If we can put some of the management of our political and economic lives in the hands of something that's interested in deeply understanding us and advocating for our deeper and more collective preferences, then I think we can solve these verification and articulacy problems, and build a society that's organized not around our shallow consumer preferences and our shallow political affiliations, but around our much deeper ideas about how best to live. And I think we can do that in a way that's verifiable. One of the dangers there is that it's opaque and manipulated by, like, the big AI companies or marketers. But actually, I think we can get clear enough on what's meaningful to us. This democratic process that I described earlier is an example of that, because it shows you your values card. It doesn't just infer your values from your presentation or something and then act on them without checking in with you; it shows you the values card, and you say, "Yes, that really does capture what I care about in this choice." If we can do that, if we can surface people's sources of meaning and kind of document, "Okay, these political decisions were made to support these meaningful ways of living that people want, and these economic arrangements support these meaningful ways of living of this group," and so on, if all of this can be legible, it leads towards an alternative to simple mechanisms (like voting and markets) that can still be accountable to the people, but accountable on a much deeper level. In a way, 'the customer is always right' or 'one person, one vote' would still be kind of true, but true not just about whatever somebody clicked, but about whatever somebody really cares about. This is the society I think we should go for, and this is what we're working towards. It may sound very vague. Part of that is because I have limited time now. But part of it is because it is a bit vague, and we need a lot of people working together to flesh this out into a vision that's clear enough that we can collectively, as a society, say, "Yeah, that is the successor to the liberalism that we've been living out." That's the way that we can actually solve these problems. And that's the future that we should work towards.

SPENCER: Joe, thanks for coming on.

JOE: Very fun. Thanks for having me.

[outro]

JOSH: A listener asks, what's your view on what some have termed the "generalizability crisis" in psychology?

SPENCER: When I think about generalizability, I think about two things. One is: if you have a hypothesis stated in English about, let's say, human psychology, and you then turn that into a statistical test so that you can actually test it in a study, does the statistical test really generalize to what you claimed in English about what you actually want to test? There can be a gap there, where the statistics say one thing, but actually making the claim you want requires more than the statistics can show. I think this is a really big problem in science, and there's often an incentive for it to occur, because people want to make their results sound impressive, meaningful, and important; they want to get them through peer review, they want them published in top journals, and so on. So that kind of generalizability, "does the statistics really generalize to the claims?", is definitely an issue, and it relates to what we call clarity in our Transparent Replications project. The other issue of generalizability that I think about is: suppose someone finds a true finding, a finding that really would replicate if it were done again in the same way on the same population. There's still a major question, which is: does it generalize to other situations? Because nobody really cares that you can do that exact experimental setup and get that exact answer. Almost all of the interest we have in these studies is that we hope they generalize at least somewhat outside of the exact context where they were tested. For example, let's say you show that you can treat depression, but you can only treat it for left-handed skateboarders; that's not very interesting, right? Even if it's true, it's not very interesting. So many studies actually rely on some form of generalizability to make the claim that they're worthwhile or interesting or important. And I think this is also a big problem, because it's often unclear how much things generalize. For example, there have been many interventions in global health where they found that something worked well in one setting, and then they tried to scale it up or have the government take it over, et cetera, and the results were not as good. Sometimes this might be due to issues with the original study design that showed it worked; maybe the study was a false positive. But I think a lot of times it's just that different situations are different. Different implementers are different. Maybe the scaled-up version was not implemented at the same level of quality. Or maybe there are certain cultural assumptions being made that don't apply once you move it to another city, and so on. So I think these are all very important major issues. And if you're interested in this topic, Eva Vivalt has some really nice work on how well interventions generalize, especially in an economic or global health context.
