April 18, 2026
Is dishonesty best understood as a permanent feature of human nature or as a condition that worsens when incentives and tools change? When new technologies make cheating easier and detection harder, do they merely reveal existing character or actively reshape it? How much of moral behavior depends less on values than on friction, surveillance, and the perceived odds of getting caught? Is the deepest threat of AI-enabled cheating that people deceive more, or that they stop believing sincerity can be known at all? If most people are not chronic liars, why do so many people still cheat when the opportunity is clean and the cost is low? Do people mainly avoid dishonesty because they are virtuous, or because they want to preserve a workable image of themselves as virtuous? Why do so many moral failures seem to stop at the point where self-justification breaks down? If people cheat only a little, is that evidence of conscience or merely evidence of strategic moderation? Why do reminders of honor, vows, and identity sometimes reduce cheating even when enforcement is absent? Does honesty depend less on abstract principle than on whether a situation activates the right self-conception? How much of morality is really a contest between temptation and the stories we need to tell ourselves about who we are? If truth-telling is cognitively easier than lying, why are human beings still so vulnerable to deception? Do we default to honesty because we are moral, or because truth is usually simpler, cheaper, and less mentally demanding? If we are biased toward assuming others are truthful, is that a moral achievement or a practical shortcut that civilization depends on?
Links:
Christian's New Book: The Honesty Crisis: Preserving Our Most Treasured Virtue in an Increasingly Dishonest World
Christian B. Miller is the A.C. Reid Professor of Philosophy at Wake Forest University. He lives in Winston-Salem, North Carolina with his wife and three children. His research primarily has to do with virtue and moral character, and he is the former leader of The Character Project, one of the largest research projects in the world on these topics.
SPENCER: Christian, welcome to the Clearer Thinking Podcast.
CHRISTIAN: It's great to be with you. Thank you so much for having me on your show.
SPENCER: Is there an honesty crisis?
CHRISTIAN: I wrote a book with that name, The Honesty Crisis, so I guess I think so. Although honesty is important and we care about it, I see it eroding today in all kinds of ways, in different areas of life, maybe more so than it has in the past. I'm trying to write this book to call attention to this erosion, the way in which honesty is in crisis, and not only call attention to it, but to try and do something about it, to try and preserve what I think is our most cherished value.
SPENCER: So what's an example of how it's eroding now?
CHRISTIAN: Dishonesty has always been with us. I'm not going to say that anything is brand new. What I'm going to say with this honesty crisis, and I'll give you an example in a second, is that things are getting worse than they were before. One example of this is student cheating. Since I'm a professor, it's kind of near and dear to my heart how my students are doing and how they're performing academically. There's been cheating in the classroom for as long as there have been classrooms. That accelerated about 20 years ago with the internet, because students were able to go online, find material that they could put into a graded assignment, and not cite it. They would plagiarize from the internet, turn it in, and get a better grade, or so they thought. More recently, as I think we are all familiar, AI has become prevalent among students completing graded assignments. They are able to cheat more easily than they did before, and it's harder to detect. Now, as a professor, I have to worry about how much of any graded assignment is my students' contribution and how much of it is AI's contribution to the work. This applies to graded papers, which is my area. I'm a philosophy professor, but it also applies to problem sets, coding assignments, and take-home exams. It's a massive problem that we didn't have 10 years ago, and it's on top of a problem we didn't have with the internet 30 years ago. It's an area where honesty is eroding.
SPENCER: One might think that there are students who are either willing to cheat or not willing to cheat, and it shouldn't really matter what the technology does, because if you're not willing to cheat, you're not willing to do it. But it seems like that might not be the case. There might be more of a thing around what's the probability of getting caught? And maybe even more importantly, how normalized is it? If you see all your friends doing it, maybe you don't think it was such a bad thing. So what do you think is actually driving potential increased cheating? Is it a norm change? Is it just the barrier becoming lower? Is it becoming harder to get caught?
CHRISTIAN: Yeah, it's all of the above; I don't want to choose between those options. On the one hand, it's easier to do it. Before, you had to go online, you had to search around, you had to find the material that was relevant. Now you can just type in some instructions and get the answer that you're looking for. It's also more prevalent because you think, "Okay, I can do this so easily. My friends in class are probably doing the same thing, and if I don't do it, then I'm going to fall behind them. My grade is going to be comparatively worse. So I just need to do it to keep up with them." And here's the third element: it's a lot harder to detect. With the internet, you had the threat of the professor going online, Googling the passages, maybe using a Turnitin kind of website, and finding where the source was. These days, the detection has not caught up with the output, so we don't have the detection software that enables me as a professor to reliably track down where this is coming from. Even if I'm suspicious, I can't prove it, and therefore I can't take it to an honor committee or impose any kind of punishment without proof. The students know that. They know that they're going to be able to mask it, and so all those factors are playing into it, which makes it much more tempting, much more appealing, and, from my perspective, much more frustrating and discouraging.
SPENCER: You would think, because this is a huge issue at many universities, that they would provide resources to teachers to help solve the problem. Is it that they're not making an effort to solve it, or is it that there really are no good solutions?
CHRISTIAN: Yeah, so I think universities are worried about it. Administrations are worried about it. My department, other departments, we're all worried about it. It's more of the latter, what you said: what are we going to do? There are, broadly speaking, three things you could try. You could just give in to it and embrace it and say, "Have at it. It's open season." I think rightly, most universities don't want to do that, and I can explore why. The second is to try to convince students not to do it. So make your strongest plea, make your strongest case to them to try and convince them not to do it, but that's really hard as well. The third thing, which is what a lot of universities are doing and kind of homing in on collectively, with administrations supporting it too, is to take it out of the hands of the students. By that I mean, make it such that graded work can't be done with AI. How do you do that? Well, you have to bring it into the classroom. So what does that look like? It could take different forms. I could hold one up; actually, I do have one in front of me, and it's worth holding up: the old-school blue book. It hadn't been used much in a long time, but guess what? Blue books are making a big comeback, and I'm using them in my classes. So have a major chunk of the grade be exams, traditional exams, or, if you don't like that, do a lot of the writing in class. Devote entire class periods to writing spontaneously there as a class together. A third option that's being used by many professors is oral exams, something that wasn't very popular, say, five or ten years ago. That's making a big comeback too. I personally don't do that myself because it's really onerous, it's very time-consuming, and there's a lot of logistical scheduling involved. But others do, and they're saying that it's successful for them. So that's pretty much where a lot of us are heading, especially for introductory courses, where we might have, you know, I have 30, but some people have 50, 75, 100, 200 students. Forget doing take-home exams. You're not going to be able to convince the students not to cheat in that kind of setup. So you bring it into the classroom.
SPENCER: Something that really bothers me is, my understanding is that there actually is technology that AI companies could provide that would make the reliability of AI detection much higher. For example, I know someone who developed such a technology, and I know that the AI companies have it available, and so it's very strange to me that they never rolled it out. I wonder if it's just because their business model is much better off if you can never tell if someone's using AI.
CHRISTIAN: This is speculative on my part. I've heard the same thing. I don't have real proof about it, but the natural suspicion is, "Okay, if they have the software, why don't they present it to us?" Well, it would hurt their business. It would hurt usage. Think of all the people who are taking advantage of it for precisely the reason we just talked about, to help them with their grade assignments. Now the professors are able to check them; that usage goes way down. I don't want to say for sure, but it's certainly tempting to speculate that it's in the self-interest of the companies not to release that detection software if they don't want to hurt the usage of their own AI.
SPENCER: My guess is that they're less worried about the student users, who are probably not paying them that much money, and more worried about business users who might be using it to write reports or do their LinkedIn posts or whatever, and people calling them out all the time: "Why are you posting this AI slop?" Right?
CHRISTIAN: Yeah, good. That makes a lot of sense. I was just focused on the student case, but of course, this generalizes to lots of other situations, and I think throughout all this, there's going to be a threat of dishonesty going on. That's what I'm worried about. That's my focus: the dishonesty involved in using this without disclosing that you're using it. But I think you're also right that the financial implications could be more severe if the detection algorithms or software were available in the business context. I think you're right about that.
SPENCER: Yeah, of course, even if they did have this technology, immediately, companies would create systems to try to get around it, but at least that would make it more effortful and increase the risk of getting caught, which I think probably would reduce cheating at least somewhat.
CHRISTIAN: It's certainly true in the student case, and I don't see why it would only be true in the student case. If the risk of getting caught goes up, and this is human nature, the likelihood of the actual cheating behavior goes down. So that makes perfect sense to me from a moral psychological perspective; moral psychology is one of the things I study. My one add-on to that is that unless it was 100% reliable, I would still be handcuffed when it came to trying to hold students accountable. If it was pretty reliable but had some false positives and negatives and so forth, then I'm going to be in trouble. If I'm trying to give an F to students only to have it be a case where they didn't use it, that's no good. So it needs to be 100% reliable, or I'm not going to be able to do anything punitive.
SPENCER: I have a funny, if disturbing story about this, which is that I once had a professor who was a math professor, and he developed a system for detecting cheating in his students. The way it worked is he would secretly mark where everyone was seated, and then when he graded exams, he would grade them next to each other based on where people were seated. He wasn't looking for people making an error or getting things right. He was looking for them making an error that there was absolutely no reason to ever make; it was a very bizarre, unique error. If two people had the same very bizarre, unique error and they were sitting next to each other, he would then look if they had another unique error, and that would be kind of the smoking gun. One time, he caught an entire ring of cheaters. It was something like eight different cheaters working together, and he gave them all an F. He did a presentation to the administration because they were wondering what the deal was, about his method and how he had proved beyond a one in a million chance that they all cheated. The administration actually said, "No, you can't flunk them all." It turned out they were a significant chunk of the football team.
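[Editor's note: a minimal sketch, in Python, of the detection idea the professor used, as described above. The data layout, function name, and thresholds are illustrative assumptions, not his actual method.]

```python
# Flag pairs of seat neighbors who share "bizarre" wrong answers, i.e.,
# wrong answers that almost no one else in the class produced.
from collections import Counter
from itertools import combinations

def flag_suspicious_pairs(wrong_answers, neighbors, rarity_cutoff=2, min_shared=2):
    """wrong_answers: {student: {question: wrong answer given}}.
    neighbors: set of frozensets of students who sat next to each other."""
    # Count how often each (question, wrong answer) pair occurs class-wide.
    freq = Counter(
        (q, a) for sheet in wrong_answers.values() for q, a in sheet.items()
    )
    flagged = []
    for s1, s2 in combinations(wrong_answers, 2):
        if frozenset((s1, s2)) not in neighbors:
            continue  # only compare students who were seated together
        shared_rare = [
            q for q, a in wrong_answers[s1].items()
            if wrong_answers[s2].get(q) == a and freq[(q, a)] <= rarity_cutoff
        ]
        # One shared rare error could be coincidence; two or more between
        # seat neighbors is the "smoking gun" described in the story.
        if len(shared_rare) >= min_shared:
            flagged.append((s1, s2, shared_rare))
    return flagged
```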
CHRISTIAN: Wow, but he couldn't do it. I'm surprised. Great story. It reminds me of one of the actual empirical ways to assess cheating. It might be worth mentioning this: when we bring people into the lab and we're trying to measure how honest they are, one strategy that's used is to give them a test, incentivize their performance on it, and make one or more of the problems unsolvable. They take the test, grade themselves, and report how they did. If they claim credit for an unsolvable problem, there's no way they could have gotten that if they were being honest.
SPENCER: Money for reporting more problems solved.
CHRISTIAN: That's one way to do it. I think it's actually quite relevant to our discussion. Here's another way to empirically assess cheating behavior in students. You bring them into the lab. In this paradigm, they have 20 problems, 50 cents per correct answer. In the control condition, they complete the test, turn it in, and get graded based on their performance. In the experimental condition, they complete the test, grade themselves, then verbally report how they did, and then we compare the averages.
SPENCER: You can't tell which individual cheated, but you can tell on average how many cheaters there were based on the difference between the distributions.
CHRISTIAN: So in one of the most famous studies, it was seven out of 20 in the control, 14 out of 20 in the experimental condition. So a huge jump in the average performance. That's compatible with a few people in the experimental condition being honest and reporting what they actually got. But as a whole, 14 out of 20 suggests quite a bit of inflation by most of the participants. I really like that as a way to get at what we were getting at. Here's a case where, as we were talking about before, there's no chance of getting detected. So that's taken off the table. People can choose to cheat or not, and in this situation, they are choosing to cheat for monetary incentive. It highlights, in my mind, a desire that most people have to cheat if they think they can get away with it and it's worthwhile. If they're not going to get detected and they think it's worth their cheating, most of us have a desire to do that. That's a more cynical picture I have here, where I think most people are not honest. By and large, they don't have the virtue of honesty. That's one experiment I like to use in that context.
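[Editor's note: a toy simulation of the self-grading paradigm just described: 20 problems, 50 cents per reported correct answer. The inflation model below, where most participants pad their score a little and almost nobody claims 20/20, is an assumption for illustration, not the study's data.]

```python
import random

def average_reported_score(n_participants, self_graded, rng):
    reports = []
    for _ in range(n_participants):
        # True performance clusters around 7 of 20, as in the control group.
        actual = max(0, min(20, round(rng.gauss(7, 2))))
        if self_graded:
            # Modest, "rationalizable" inflation; no one jumps straight to 20.
            reported = min(20, actual + rng.choice([0, 2, 4, 6, 8]))
        else:
            reported = actual  # the experimenter grades the sheet
        reports.append(reported)
    return sum(reports) / len(reports)

rng = random.Random(0)
print("control (graded):", average_reported_score(1000, False, rng))
print("self-graded:     ", average_reported_score(1000, True, rng))
# The averages diverge, yet no individual cheater is ever identified.
```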
SPENCER: I remember that Dan Ariely did a number of studies around honesty, and his claim about this (I'm curious if this finding has replicated) was that many people cheat a little bit. There are only a few people that will egregiously cheat, but many people will give themselves a couple of right answers. Do you agree with that in your research, or does your data contradict that?
CHRISTIAN: Let me first be honest, which is that I'm a philosopher, not a psychologist, so I'm not actually running the studies myself. I've partnered with psychologists, I've helped design studies, but I'm not the primary person running them. I read all the stuff, of course, I read the published literature, and I am on board with that part of what he's doing. Now, he's a very polarizing figure. There's been some scandal and controversy about his work. But on that particular finding, let's go back to what we were just talking about: from 7 to 14 is a significant bump, but it's not all the way to 20. If you're going to cheat, why not just say you got 20 out of 20 and get paid the maximum amount? Almost no one did. In other variations of this kind of setup, it was three in the control condition and six in the experimental condition. It doubled with the opportunity to cheat, but it still wasn't 20. Interestingly, there are ways to bring down the amount of cheating, even when you know you won't get caught. This also aligns with some of his views. He had an experiment where you recall as many of the Ten Commandments as you could and then take the test and grade yourself. That one ended up not replicating, so that's kind of a bust, but other ones can get you to the same place. An honor code: if you have students sign an honor code, like we have at Wake Forest, "I pledge on my honor that I'm not going to lie, cheat, or steal," or whatever the wording is, and then they take the test, grade themselves, and verbally report how they did (same test, same 20 problems, same 50 cents per correct answer), the average cheating went back down to the control level. So two striking things, piggybacking on what you said: yes, cheating happens to a modest extent, but not the maximum extent. Furthermore, there are ways to bring that level of cheating all the way down to the baseline, to the control level. My take on all this is that it points us in the direction of a mixed picture of honesty. We're not virtuously honest people. We're not completely dishonest people either. We're a mixed bag. We don't cheat as much as we could, and there are situations where we could get away with it and we don't cheat at all. My overall picture is something like that.
SPENCER: Ariely, as far as I recall, had the view that people are cheating as much as they can rationalize. If they can rationalize giving themselves a few extra points, thinking, "Oh, I slept badly. I would have gotten those questions right," or "I almost had the right answer," or whatever it was, whereas they can't rationalize just making up all their answers and pretending they got them all right. Do you agree with that?
CHRISTIAN: Yep, I think that's probably a leading view still today in the empirical literature, as far as explaining the data. The heart of it, which also connects nicely to the Honor Code example, is this idea that we care about what other people think of us, whether they think of us as honest, and we also care about how we think of ourselves, and whether we want to be able to think of ourselves as honest people. We can fudge a bit, and we can rationalize a bit, and thereby cheat a bit. But it's hard to take this test and say afterwards, "I got 20 out of 20," when I know I only got seven, and continue to think of myself as an honest person. That's going very far beyond the boundaries of rationalization. A couple of key moving parts in the psychological story are the desire to cheat when you think you can get away with it and not get caught, and reward yourself, but also a desire to think of yourself as an honest person, and a desire for other people to think of you as an honest person. Sometimes those tug against each other, and the latter can keep the former in check, preventing it from going wholeheartedly in the direction of cheating.
SPENCER: What do we know about how often people lie? Obviously, you can ask people how often they lie. It's not clear that people are going to be that accurate. But some people claim that people are lying dozens of times a day. Most people, I think, don't feel that that's actually the case. What does the research tell us about this?
CHRISTIAN: This is also part of the book. Most of the book is on honesty crises, and I go through six case studies, one of which we've already talked about with the AI student cheating. What you're referring to, though, is a chapter that's different from those. It's trying to balance the story out a bit and talk about some more positive and optimistic findings. Now you might think, "How is this going to be optimistic? I mean, isn't the story about lying going to be pretty pessimistic? People lie all the time." But that's not what the research suggests. Early research from the 1980s suggested that people might tell, on average, about two lies a day, which isn't great, but more recent research in the last 15 years has given us a more nuanced story. It didn't just take this whole group and average their amount of lying. It said, "Let's look at this person by person. Let's find out how much this person is lying, this person is lying, this person is lying." To give it a little bit more concrete focus, a 2010 study by Serota looked at self-reported lying in the last 24 hours. So how many lies have you told in the last 24 hours? 60% of participants, and this was, I think, a thousand participants, said they did not tell a single lie in the last 24 hours. Now you might wonder if they're being truthful, but let's just go with that for a moment. 40% said they did tell a lie. But even among the 40%, most of them would say, "It was just one lie or very few lies." It was only 5% who reported that they tell lies frequently. In fact, of all the lies in total, half of them were told by that 5%. So we have a situation where averages cover up or mask individual differences. Most people, it looks like, tell the truth most of the time; there's only a small minority of people who are frequent liars. Now, if that's right, I think that's positive. It gives us grounds for trusting most people most of the time that they are telling the truth to us, and it also reflects well on our character if we're part of that group that tells the truth most of the time to other people. But of course, there is that 5% still out there, and if I'm one of those 5%, then that's not reflecting well on me. But I'm also going to be worried about whether I can pick out who in my social circle or my network might be one of those 5% because I have to be very worried about them. They're going to lie a lot, seemingly without too much guilt or hesitation, and I want to be very cautious when I'm around them.
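[Editor's note: a small worked example of how an average can mask individual differences, in the spirit of the study described above. The counts below are invented to reproduce the qualitative pattern (60% report zero lies, a roughly 5% tail tells about half of all lies); they are not the study's data.]

```python
# 1,000 hypothetical people and their lies in the last 24 hours.
population = (
    [0] * 600    # 60%: no lies at all
    + [1] * 250  # most of the rest: a single lie
    + [2] * 100  # a few: two lies
    + [9] * 50   # ~5% prolific liars (9 apiece is illustrative)
)
total_lies = sum(population)
print("mean lies per person:", total_lies / len(population))  # 0.9
print("share told by top 5%:", (9 * 50) / total_lies)         # 0.5
# The mean alone suggests "people lie about once a day"; the breakdown
# shows most people told none and a small tail told half of them.
```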
SPENCER: Yes, those are consistent with my personal life experience. I've met people that lie incessantly, often about bizarre things, where you're like, "Why are you even lying about that?" Yet, most people I know, I know them for years and never catch them in a single lie. It doesn't mean they don't lie, but it means they lie infrequently enough that it's really not obvious. I would also suspect, and I don't know if anyone's ever studied this, but I would suspect that the magnitude of lying is also similarly distributed in the sense that most of the lies that most people tell are pretty small lies most of the time.
CHRISTIAN: Yes.
SPENCER: And that the real liars, those who are telling lies all the time, also tend to tell the largest magnitude of lies.
CHRISTIAN: Yep, I don't know if that has been, as you said, extensively studied, but that's my take on it. There does seem to be some variation depending upon who you lie to and the magnitude of the lies. So compare friends and family versus strangers. The magnitude differs in that dimension. If it's friends and family, that might incentivize some greater magnitude lies, whereas if it's strangers, you might think it's more white lies. What's the reasoning behind that? Well, if there's more at stake, you might want to cover up more so that your friends and family don't find out about it. That might be speaking to your question, a dimension along which you could see a difference in magnitude.
SPENCER: There was someone that I knew a long time ago who told me that they cheated on their girlfriend and that they enjoyed it when she would catch on to them, because then he would have to come up with some crazy lie to convince her that he wasn't cheating on her. There are people like that in the world. I've met people like that. Not everyone has the same relationship to lying, but I think very few people are like that. I think almost everyone doesn't like telling lies. They might do it for some advantage, but they view it as a negative thing.
CHRISTIAN: I agree. In fact, let me just add a quick note. It's gotten to the point now where probably the leading theory here is called Truth Default Theory. The idea is that human beings default into a truthful mindset. When they're talking to others, that's the default. It actually takes mental energy and work to shift into a lying mindset and tell a lie. Truth-telling is more natural to us. Lying is more unnatural, and then kind of pairing up with that is a truth bias. We're biased when we hear other people to think that they're being truthful to us, and we have to shift our perspective and do some work to recast them as someone who might lie to us. I'm just echoing what you're saying.
SPENCER: Yeah, it's really interesting. There's this very general principle sometimes called cognitive miserliness, where humans just don't tend to exert mental effort unless there's some reason to. Now, there are exceptions, people who really love exerting mental effort, but it's sort of like physical effort. Most people, if there's an easier way to do something, they'll do it. They won't expend unnecessary effort most of the time. Maybe they'll go to the gym to exercise, but they won't carry things in the most difficult way; they'll carry things in an easy way. I think it's similar with our minds. We tend to do things in a cognitively easy way. The truth is actually just cognitively easier than lying most of the time.
CHRISTIAN: Yeah, you nailed it. I think that's exactly right and very harmonious with this theory. I eat my wife's dessert. I think it tastes great. She asks me, "How does it taste?" I could do the work of lying for some reason, but why bother? It's just easy and natural and spontaneous to tell the truth. Another example: someone asks me for directions. I know where the building is. I just tell them naturally, "Yeah, just take a left down the road here, and you'll come to that building." If I'm going to mislead them, I need a reason. How is it going to help me? Then I have to do some extra work to figure out what the false directions are going to be, send them the wrong way, and then I also have to worry that they might find out that I lied to them and come back and fuss at me later on. That's a lot of extra expense on my part. Unless the benefit to me is significant enough, why not just default to telling the truth?
SPENCER: I have a past guest named Blake Eastman, who's done really interesting research analyzing people's facial expressions and emotions and things like that. One thing he did is he spent a lot of time studying videos of people playing poker to try to see what tells are really there. There are so many theories about this. One of the most robust findings he had was not something about emotions or trickery. It was just about the cognitive demands. If people had a more cognitively demanding hand that required more thinking, they tended to look at their hand more and keep checking it, which is funny because it literally took more brain power to figure out what their hand meant. If they had an easy decision to make, they would just glance at their cards and then be done with it. I thought that was fascinating how cognitive demands can tell more.
CHRISTIAN: That's interesting. I'd never actually heard that. I do know, and I'll say this while noting it's getting out of my wheelhouse of knowledge: in general, we're quite poor at detecting lying. That's what the empirical research says, so kudos to the person who found this one. In ordinary social interaction, we tend to have an inflated view of our ability to read other people and detect whether they're lying. As a matter of fact, we're barely better than 50/50.
SPENCER: I've run this myself in my living room. One time, I did a social experiment where everyone had to pick a question, and it was going to be asked of them. They knew what the question was, and they flipped a coin, and nobody else got to see it. If it was heads, they had to lie. If it was tails, they had to tell the truth on the spot. Everyone in the room tried to figure out if they were lying. I think it was 54% accuracy, just right in line with the standard research. They were barely better than chance. The bizarre thing is, people believe they can do it. That has not been corrected.
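[Editor's note: a quick calculation of how little 54% accuracy differs from coin-flip guessing. The number of judgments is arbitrary here, since the exact count in the living-room experiment isn't given.]

```python
from math import comb

def p_at_least(k, n, p=0.5):
    # Probability of k or more successes out of n under pure chance.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 50               # assumed number of lie/truth judgments
k = round(0.54 * n)  # 54% accuracy -> 27 of 50
print(f"P(>= {k}/{n} correct by chance) = {p_at_least(k, n):.2f}")  # ~0.34
```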
CHRISTIAN: And that's reliable: decade after decade, people still think this. Even though the research is starting to get out there, even though we're trying to update people's beliefs here, it's not sinking in yet. I don't know why it's such a recalcitrant belief. People have an overconfidence bias here, and they're not giving it up yet.
SPENCER: Yeah, I think usually when people actually detect lying, it's more detecting inconsistency, or it's detecting it based on information, like the person said something that contradicts the information you have, or they're acting inconsistent with something else that they previously did, and you know that there's some kind of deception happening.
CHRISTIAN: That's right. But what we tend to think of when we think of lie detection is face reading or body reading. I can read off of your knee bouncing, or you're twitching, or your eyes are twitching, or something's going on, and that's a sign to me that you're lying. Therefore, from your body language, I can infer that. That kind of reading is notoriously unreliable, and that's what I was referring to. We just can't seem to get past our overconfidence in being able to do that.
SPENCER: This reminds me of one of the weirdest lies I've ever told in my life, which was so many years ago. I was a teenager, and I found out that my girlfriend at the time had cheated on me. The way I found out, I didn't want to throw the person under the bus who had given me the information, so the next time I saw my girlfriend, I pretended I could read on her body language that something was wrong. Eventually, I got her to admit that she cheated, but she wasn't acting weird.
CHRISTIAN: You were piggybacking on that idea that we can detect this, on the fact that people believe it. Wow, very intriguing. I've never heard that either. I'm sorry also about what it involved.
SPENCER: Thank you. I was a teenager. Hopefully, I've gotten over it. So here's a very basic question, but as a philosopher, I'm sure you have a more complex answer. What does it even mean to be honest?
CHRISTIAN: Sure. That's actually kind of where I come from originally. When I'm thinking about all these issues, I started years ago wondering about honesty, not how honest people are or whether there is a crisis, but just the definitional question. Philosophers often do this. We have to define our terms before we can make any progress. So I'll say a few preliminary things, and then please follow up where you want to go with this, because there's so much to say about it. I'm thinking of honesty as a virtue. It's a part of our character; it's a character trait. There are also particular honest actions. An honest person performs particular honest actions. But there's more to being honest than just performing particular honest actions. You've got to do them in a variety of different situations. You can't just do them in the courtroom or at the party. You've also got to do it steadily over time. It can't just be a Tuesday thing or a once-a-week thing or a once-a-month thing. It's got to be stable over time. It's also not something that pertains only to lying, although that's where we go right away, as much of our discussion has been about lying. Honesty pertains, of course, to lying, but to so much more. It pertains to stealing, cheating, bullshitting, hypocrisy, self-deception, fraud, lots of things. These things, being morally bad, these moral failures, fall under the purview or the scope of honesty. Honesty prevents them. But the last thing, and I'll stop for the moment, is that even that's not all there is to it. If you have an honest character, you perform these actions in a variety of situations, stably over time, and that pertains to all these different areas of morality. But it matters why you perform them too. We're talking about a virtue here. Virtue is behavioral, but it's also motivational. The why matters, not just the what. If I tell my boss the truth, but I'm only doing so to avoid punishment at work or to advance my promotion in the company, that's purely self-serving motivation. That doesn't count as virtuous, honest motivation. If I don't steal only because I'm afraid of getting caught and punished and maybe losing my job or suffering reputational damage, again, that's self-serving. So motivation here can't be self-interested. If it's going to be virtuous, it's going to have to be something else. We could talk about what it might be, but motivation matters. Behavior matters, cross-situational consistency matters, stability over time matters. There are a lot of moving parts when it comes to being an honest person.
SPENCER: Some people have certain philosophies or codes they live by that tell them they have to be honest. But if you don't have one of those, it's interesting to think about, "Well, why be honest?" I can think of three main reasons, and I'm curious to hear your thoughts on them. One is that you might just value honesty. I think I have an intrinsic value of honesty. I think that's just something I fundamentally care about on a deep level, but someone might not have that. A second reason is that I think it's a really good life principle, in the sense that by being honest and sticking with it, people will eventually learn you're honest, and people learn to trust you. It might lead to better outcomes. You might think, "What if I just deceive people from time to time when it's to my advantage?" That might actually lead to worse outcomes than just sticking to this strong principle and communicating that principle. If you really stick to it, then you can credibly communicate it; it's sort of a costly signal, in a way. The third reason is that you might want to universalize it. You might say it's something that I want everyone to do. If I want to commit to a principle that I believe everyone should follow, maybe it's for the common good, and that can get other people to do it, or we can all agree to it as rational agents. What do you think of that?
CHRISTIAN: Yeah. First of all, you've hit the major ethical theories, knowingly or unknowingly. You hit Aristotle, you hit Kant, and you hit utilitarianism. You gave three different justificatory answers that appeal to the three major theories in ethics today. Someone like Aristotle, when we think about virtue ethics, is going to say virtues in general are intrinsically valuable. They're good in and of themselves, independent of the consequences. I agree with that. Utilitarians are going to focus on the consequences or the outcomes. If you're going to make a case for honesty being an important policy, then it needs to have good outcomes. Good outcomes for whom? Utilitarians are going to focus on good outcomes overall, for everyone, although I think you were focusing a little bit more on good outcomes for yourself. I think it's going to be both. It's going to be good outcomes for yourself, which will make your life better by and large, and it's also going to make society better. Someone like Kant is going to focus on universalization. What if everyone did this? Imagine a world in which this was the default policy. Can we conceive of that world? Would we want to live in that world? That's very much Kant's way of thinking. I agree with that too. Two qualifications. One is, I wouldn't want to push it too far so that there are no exceptions, because I think there are going to be some exceptional cases where we should not be honest. I don't want to go all the way to what Kant did, and Augustine, who said that lying is always wrong. The second qualification is we have to think harder about that deceitful person. If you just run the argument at the level of benefiting myself, the standard pushback is: couldn't I benefit myself even more if I were publicly honest but privately dishonest? If I knew I could get away with it and not get caught, kind of like our earlier test example, if I could get that extra money by cheating on the test and not get caught, isn't it actually more beneficial for me to do it in those cases? Doing something dishonest while publicly maintaining my reputation for honesty gives me the best of both worlds: reputational benefits and private self-interested benefits. Those are two qualifications I would give; otherwise, I agree with you entirely.
SPENCER: Yeah, that's a really good breakdown in terms of that question of, should we always be honest all the time? It seems clear that if you're viewing it as a value, there can be contradictory values that sometimes come up. The classic Kant example: if a murderer is coming to look for someone who's hiding in your house, you could lie to them to protect that person's life. That's a bigger value, I think, to most people than that one lie. Kant might disagree with that. Did he actually disagree, or is he being misrepresented?
CHRISTIAN: That's his example. He took the position that's counterintuitive to most people. He had the ax murderer looking for the victim, and he thought you were morally obligated, at least in that essay that he wrote, to tell the truth. If you knew where the would-be victim was, you were morally obligated to tell the truth to the ax murderer. I tell that to my students, and none of them buy it. The updated version of this is the Nazi at the door example. It's a little bit more relatable than the ax murderer one. You're hiding a Jewish family in the basement. The Nazis are going door to door. They come to your door. It's a routine question: Do you know where any Jews are? You do know, because you're hiding them in the basement. If you tell the truth, they're going to come in and take the Jewish family away, probably take you away too. If you lie, you know that they're just going to keep going to the next door. Overwhelmingly, my students will say, "The obligatory thing to do is to lie to the Nazi." Back to your question: Kant, in that essay on the supposed right to lie, said the obligatory thing to do is to tell the truth. Interesting. A wrinkle on that example: I don't think that even follows from Kant's own ethical theory. This isn't my original idea; I'm trying to be honest here, so I don't want to plagiarize it. This is from the philosopher Christine Korsgaard at Harvard. She said, look, if you use Kant's own theory correctly, his own theory would have allowed you to lie. Here's why: although it's true that you can't universalize lying in general, you can universalize lying in a specific situation to protect innocent lives. That does not violate his categorical imperative. People can conceive of a world like that, and would want to live in a world like that. It passes his tests. It makes it through his procedure. Therefore, on his own terms, he should have been willing to allow lying to the Nazi at the door, or in his example, lying to the ax murderer at the door. So interesting. It has the unfortunate implication that Kant was confused about his own theory, which is not such a charitable interpretation, but maybe Kant's own theory would have allowed him to lie.
SPENCER: That's really interesting. I always took that as a problem with Kant's theory. If you say, in order to have something be an ethical action, you have to be able to universalize it and say that it should always be done by everyone everywhere. What is the action? How do we define the action? There's an infinite number of ways to define it at different levels of granularity. You could think about punching someone as a series of micro movements of your hand. Is that what you're talking about? Or you could think of this abstract idea of hitting someone with a blunt object or whatever, right? It seems like a fundamental ambiguity in the theory.
CHRISTIAN: Yeah, you're right. For the people in the audience who know the jargon, this is called the problem of relevant maxims. How do you formulate in the first place the maxim, the intended action you're thinking about performing? You haven't done it yet. You're just contemplating performing this. How do you formulate that maxim so that it's formulated in the right way to be used in Kant's procedure, so that the procedure works properly? If you formulate the maxim too generally or vaguely, you're going to get bad results. If you formulate the maxim too granularly, you're going to get bad results. It's a garbage in, garbage out problem. You put garbage into the procedure, you get garbage out. This is, I think, unquestionably, the biggest problem for Kant's Universal Law formulation. If you really want to get into it, there are some solutions that have been proposed, but it's fair to say that no one has arrived at a solution that has commanded a lot of agreement or that people are buying into. It's still a vexed matter amongst followers of Kant.
SPENCER: It's funny. I know some virtue ethicists and I know some utilitarians, but I don't think I know anyone who tries to live by the categorical imperative, at least not that I'm aware of.
CHRISTIAN: I could introduce you to some Kantians, if you like. There are some today, some very prominent people in my world of philosophy who are Kantians. I don't know of anyone who holds everything Kant held or is willing to go to bat for all of the claims he makes in the Groundwork for the Metaphysic of Morals. But there are people today who say, "Look, he was on to something. We can preserve some big ideas from him. We may have to adjust them or update them or modify them, but it's still worth being a Kantian today." It's just as almost no one is following Aristotle to the letter, especially when it comes to Aristotle on slavery, Aristotle on women, and so forth, but we can be Aristotelians. We can update and still take a lot of what he had to say and think it's relevant today.
SPENCER: Yeah, my favorite thing from his moral theory is the idea, and I don't even know if he exactly meant this, but maybe he did, that some things you could say are ethical could actually be self-undermining: if you were to claim, "Oh, this should be a universal rule," it would actually defeat the purpose of the rule itself. I think that's an interesting perspective on a way that it can help us rule out some moral ideas.
CHRISTIAN: This is still within the same territory, so a little bit of context for listeners who aren't up on their Kant. Kant actually gave different theories, or at least what he called formulations, of the categorical imperative, and we're still working with the first one here, which is called the Universal Law formulation. We haven't even touched the second one, which involves treating others as ends, never merely as means; that's the humanity formulation. Yes, on the Universal Law formulation, he did have this idea. Suppose you make a promise with no intention of actually keeping it, and you universalize that. Imagine a world in which everyone was making promises that they had no intention of keeping. Well, that would quickly undermine the very practice of making promises, because everyone would figure out that the promises weren't being kept, and so no one would take promises seriously anymore, and it would undermine the very act of trying to make a promise. I think that's quite clever. It works nicely with something like promising and lying. There are questions: can you run that for all of morality? I'm not sure. And also, how, again, do we formulate these actions carefully so that we don't get the garbage in, garbage out problems? It's a clever idea. It's just hard to run it and have it be a complete moral theory that's going to work in all cases.
SPENCER: This is the first time I've ever been glad that I actually read one of Kant's books: I could actually draw on it in this conversation. Well before that, I had just kind of suffered through the incredibly difficult translation of it.
CHRISTIAN: Oh, impressive. Very impressive. The first time I think I've ever been asked to explore some of the details of Kant. So kudos to you.
SPENCER: So what is your favorite definition of a lie?
CHRISTIAN: I don't think I have one, so I'm going to be honest in trying to dodge the question. Here's a standard definition: it's verbally saying something which you believe to be false, with the intention of deceiving the other person. It's got two parts to it. You're saying something which you believe to be false; it doesn't actually have to be false, you just have to believe it's false. There's also an intentional element to it as well: why are you doing it? With the intention of trying to deceive the other person.
SPENCER: So let's break that down for a second. If you say something you believe to be true, clearly, that's not a lie. If you don't have the intention to deceive, let's say you just told them a false thing, but you knew they wouldn't believe you. Would that not be a lie?
CHRISTIAN: That would not be a lie. Or say you're an actor, and you say, "I'm Julius Caesar." You're saying something false; you're not really Julius Caesar, but you have no intention of deceiving the audience. The audience knows that. The audience knows full well that you're an actor and that you're not Julius Caesar, so you're not trying to pull a fast one on them. I probably presented that too quickly, but you're quite right, two components: it has to be believed false, with the accompanying intention to try to deceive the audience. Now, that's not original to me. That's the canonical or classical or traditional definition. This goes back to Augustine. Many people have used it over the years. The problem is that in the contemporary literature, it's widely rejected. There are thought to be counterexamples to it. Philosophers tend to get really caught up on whether a definition holds in every single case, and if it doesn't, then it fails. If they can come up with just one exception, they have to scrap it and come up with a new definition. Lots of philosophers think this definition works in many cases, but it doesn't work in every case. I'll tell you why; I'll give you an example in a second. My view is that, for my purposes, I don't have to worry about it, because most of the examples I'm focused on in the honesty crisis I'm pointing to are going to be captured by the standard definition; that's going to be good enough. But here, just for fun, is one counterexample that's used: bald-faced lying, cases where you do say something that you believe to be false, but in a bald-faced, very transparent way, because you know that your audience knows that you're lying, and your audience knows that you know that they know that you are lying.
SPENCER: They know that you're lying; it sort of defeats the lie.
CHRISTIAN: So it's not going to pull one over on them, but nevertheless, you are still lying anyway. For example, under oath, you might lie anyway so that it goes on record, but everyone in the room knows that you're lying. You're not going to intend to deceive anyone. You're not going to pull a fast one on anyone. Nevertheless, it's still a lie. I think that's an interesting counterexample. It may cause trouble for the original definition. But I don't really spend time playing the game of coming up with complicated definitions that can block every single counterexample.
SPENCER: Something I think happens quite a bit in real life that maybe doesn't meet that definition is paltering, which is actually one of my favorite obscure words, where people make a series of true statements that cause someone to have a false belief. I think we actually see this quite a lot where, let's say an influencer will say a series of true things, but it will lead someone to a completely false conclusion, like, for example, that this health product they're selling is a really good idea to buy.
CHRISTIAN: Yeah, nice SAT word there. Way to get that in there. I agree with you, but I will make one tweak, which is, I don't think that was ever meant to be considered a lie, or if anyone was thinking of that as a lie, they're making a mistake. I think it's just a different category of dishonesty. I call it misleading. Paltering works too, but it's not ordinary language, so it's harder for people to latch on to. In the case of misleading, you are telling the truth, and you know you're telling the truth, but you're presenting it, you're framing it in such a way that — here comes the intention again — you hope that your audience will draw a false implication from what you're saying. An example I like to use here goes back to infidelity. So when the person comes home from a night at the bar, they tell their significant other, "Hey, I'm home." The significant other says, "Wow, it's really late. Where have you been?" And the person says, "Well, I was at the bar." They didn't say just at the bar; they said they were at the bar. They're telling you the truth. They're hoping that you will draw the implication that they were just at the bar. But they never said that. They just said they were at the bar. They left out that after the bar, they went home with this person they met at the bar and did some things, and then they came home. So it's a technically true statement that they were at the bar. They're hoping you draw the implication they were just at the bar so that they don't have to lie, by omission or commission, about what they were really doing. So yep, I'm with you there.
SPENCER: Do you ever worry that philosophers are playing word games, spending a lot of time on questions where, really, we're just dealing with definitions? We can define them however we want; it's just kind of a pragmatic thing. You can either study empirically how people use a word, that's one question, or you could say, "Let's just define it however is convenient for this conversation; let's agree on it." Once we agree on it, we know what we're talking about; that's fine.
CHRISTIAN: Yeah, we philosophers spend a lot of time doing that. If we only did that, I would find it problematic. I think I would not be happy with that, but it has some value, as long as the philosopher is upfront about what they're doing with the definitional work. So here, as you said, if their project is to try to capture ordinary usage, and they're trying to clarify for people what they really mean by lying, or what they really mean by misleading or honesty, without going against that usage, then they're held accountable to ordinary usage. Intuitions come into play, and if their definition is counterintuitive, that's a problem for the definition. But as long as they're transparent about that, I think that's a valuable project. Or they can be engaging in a different project, as you said. They could be stipulating: "This is what I'm going to mean for the sake of my argument." As long as they're transparent about that, that's fine too, if it has a good payoff. If you tell me, "This is what I'm going to mean by honesty," and that's not ordinary usage, but you told me it's not ordinary usage and you think it has some payoff, then we're going to get somewhere productive by doing that. Show me the goods: take me somewhere productive, and I'm satisfied at the end of it. It was worth my time to go with you with the stipulative definition. What would be unfortunate would be if you don't clarify what you're doing with your definition and why you're using it, and then I'm always wondering where we're going and what the standards are to assess your project. That's frustrating.
SPENCER: Yeah, there's this delicate thing where, on the one hand, you might be doing linguistics, and there's nothing wrong with doing linguistics. It's just not philosophy by most definitions. I don't mind philosophers doing linguistics; that's fine. On the other hand, you might be doing psychology, studying how people think about this thing, which is also fine. I don't mind philosophers doing that either. And on the third hand, you might be just defining words, which is also fine. All those things are fine. I don't know if any of those are philosophy. I don't mind philosophers doing those things, as long as we're clear on what they're doing.
CHRISTIAN: Yeah, I don't either. I mean, there are different traditions of philosophy, and different traditions emphasize different things. There's an analytic tradition in philosophy, there's a continental tradition in philosophy. At different times and places, philosophers are doing different things. In the analytic tradition, which is mainly in America and the UK in the 20th century, a lot of their focus was on conceptual analysis. They were trying to give a very rigorous conceptual definition of some concept. You wouldn't believe the number of papers that were written trying to give a conceptual definition of knowledge. Does knowledge require belief? Does it require justification? Does it require truth? Is that enough? No. There are these counterexamples. We need to have a fourth condition and blah, blah, blah, blah, blah. Dissertations, books, entire careers were built on the project of trying to do that conceptual work on one concept. That's been very big at times in the history of philosophy. I think it has some value. It's just not satisfying for me. If that was the only thing I did, I would look back on my career and say, "Boy, I wish I had done more than that."
SPENCER: I can relate to that. Do you personally have a sort of set of principles around when you're willing to lie?
CHRISTIAN: Good question. No, I don't, and part of that is because (well, now you're making me squirm a bit here) I don't have a set of principles about when I'm going to do anything, morally speaking. It depends on what principles mean here. I'm not a philosopher who thinks that there is one moral principle that governs all of our behavior, or a handful of exceptionless moral principles that govern our behavior. And that's going to apply in the case of honesty. Now, what am I going to do, say, with the Nazi at the door? Yes, I think it's okay to lie there. Why? What's my principle? I'm not going to be able to readily come up with a worked-out, carefully defined principle. What I'm going to do instead is often rely on my weighing up of the particular morally relevant features of that situation. I'm going to take into account innocent lives that might be lost. I'm going to take into account the effects of telling a lie to the Nazi. I'm going to take into account the likelihood that the Nazi would find out that it was a lie, and the future implications of that. I'm going to take into account all these different considerations, many of which are morally relevant, and then I'm going to weigh them up, but not in any kind of formulaic, systematic way. It's probably somewhat subconscious, somewhat conscious, hard to put into a mathematical algorithm, and then see what emerges from that. What emerges is that considerations of compassion towards the Jewish family outweigh considerations of honesty towards the Nazi. As a result, I think the all-things-considered virtuous thing to do in this situation is to tell a lie to the Nazi. But then we go to another situation, and I have to start over again and do the same kind of situational analysis of the relevant factors, because there might be one factor that's different in this new situation, and maybe that factor is stronger or weaker than it was in the first situation. So I have to start over and do that weighing up again. That's how I tend to work. Sorry if that's not super helpful for other people, or doesn't give us a nice shopping list or checklist to go down and say, "Okay, check, check, check, check, check, okay, I can lie to someone," but I think that's not the way morality actually works. That picture is too simplistic.
SPENCER: Yeah, it's interesting. I think about it a little bit differently. When I use the word principle, I'm not talking about a fundamental universal principle or anything like that. I'm just talking about a heuristic you try to live by and aspire to live by. I have a principle that I try not to lie, and I take it seriously: I try to avoid lying when it's not necessary. Of course, I'm not perfect at it, and of course, if there were a really good reason to lie, say something incredibly valuable would be destroyed if you didn't, then that would overwhelm the principle. But I find that a lot of lies people tell are really just trying to save face in the situation, or they're just easier to say than the truth, or more comfortable, less socially awkward, or whatever. So I do try to use this principle and avoid lying. In those cases, I do have an exception, which is that I will tell white lies when I'm reasonably confident the person would prefer it. Of course, that can be an issue, because you have to make some guesses about what the person would prefer, and you have to be careful there, because you can rationalize. For example, let's say my friend wrote a play, and afterwards my friend runs up to me and says, "What did you think?" It's really obvious that they want me to have really liked it, but I actually didn't like it. I'm not going to say I didn't like it. Realistically, I'm going to try to say something true about what I did like, but if they press me more, I'll probably say, "Oh, I enjoyed the show," or something like that, because I think in that case they would probably prefer me to do that. But yeah, what's your reaction?
CHRISTIAN: Yeah, that's good. That helps me a lot to hone in on what I was thinking. There's a philosopher named W.D. Ross, who had a system of morality called moral pluralism, and he had a list of what he called prima facie duties. Let's not worry about the jargon; it's just a list of what you called heuristics, go-to default principles like: don't tell a lie, don't cheat, don't steal, help others in need. They don't work in every case; they can come into conflict, and when they do, we have to weigh them up, and that weighing is a case-by-case matter. I like that approach a lot, and I think it's very similar to what you were talking about, not just with respect to lying, but with respect to morality across the board. Now, in the white lie case, I also have a similar heuristic, to use your language, of "do not lie," and I'm similarly hesitant to engage in white lies, more so than many other people are. I think there's a cultural acceptance of white lies beyond what I would be comfortable with myself. I think there are all kinds of reasons not to tell them, even though they can be easy to tell and seem like the path of least resistance and least emotional tension. There's the need to remember that you told the lie, so that the next time the situation arises, you keep your story straight. You have to remember not to tell third parties the truth, since they could out you and betray you to the person you lied to. You have to make sure the person never finds out about the lie in the future, because that could undermine the trust in the relationship. You have to think about the damaging consequences if the person makes the dessert again, or wears the outfit again because you told them it looked nice, and it then reflects badly on them in a professional context. All kinds of these reasons make me hesitant. And interestingly, empirical research backs up the thought that people overestimate the possible negative consequences of telling the truth. They think, if I tell the truth, it's going to be bad; often it's not nearly as bad as you expect. Having said all that, I like your exception. I'm not going to ban white lies completely; I'm going to make some exceptions too. One exception I might have is cases of extreme emotional fragility, where it's just not the right time to tell the truth. Maybe the truth could be told later, but right now it would send the person into a tailspin, or fill in the blank. For the sake of protecting them, in those cases, though rarely, I would be hesitant about telling the truth.
SPENCER: Yeah, I think you make a lot of good points, and I do think white lies are a slippery slope, and they're easy to justify when they're not actually justified. But I do think there are some times when people really want us to lie to them, which is a weird situation to be in, but I do think it sometimes crops up.
CHRISTIAN: That's a delicate one. I didn't talk about that one specifically. I'm sympathetic to it, although, as I've been emphasizing for the last 10 minutes or so, I would want to treat it case by case. There are times when the right thing to do is to tell the person the truth, even if they would prefer to be lied to; they need to confront the truth to break out of some kind of rationalization or delusion or ignorance that they're working with. Even though it's going to be painful, they're not going to like it, and you're violating their expressed preference, I think sometimes you have to tell the truth.
SPENCER: Let's say someone really wants you to lie to them. They would want you to pretend you love them even if you didn't. They might genuinely feel that way, but you might be doing them a huge disservice by pretending you love them.
CHRISTIAN: Yep. That's right. And the long-term consequence of that for the relationship could be very damaging. So, a nice example that works.
SPENCER: Another kind of funny case is that there are some things that are literally false, but there's a socially accepted meaning that isn't taking them literally. An example is this app, Particle, where you RSVP to people's events and say whether you're going or not. The only option for saying you're not going is "can't go." I've heard very literal-minded people complain about this, saying, "I hate that the only option is 'can't go,' because sometimes I can go, but I don't want to."
CHRISTIAN: Right, right. Interesting.
SPENCER: Yeah. So my view is that it has a socially accepted meaning; we all kind of know what it's really trying to say.
CHRISTIAN: So my mind went to cases where there's kind of background acceptance that we're not to take the person literally. One that I see a lot on campuses, for example, is when a student walks up to another student and they're kind of passing each other, and one says, "How's it going?" and the other one says, "Fine, how are you?" or "It's going great," or "It's going good." Well, literally speaking, sometimes that's false, but there's a background acceptance that in this context, we're not actually engaging in truthful speech. We're just doing something polite to kind of grease the wheels of this very passing social interaction.
SPENCER: Social ritual, sort of?
CHRISTIAN: Yeah, that kind of thing. And so, no, I would say in those cases it's not a failure of honesty. If the person is actually having a really bad day, but they say, "Fine, how are you doing?" and just keep going on their way, I don't think that's a failure of honesty, because I think the norms of honesty are suspended in this context, just like they're suspended in poker games and in drama productions, and maybe in your case as well. We're not really being literal here. We're not really asking you to tell the truth. That's where my mind went, at least. Interestingly, the norms can apply, but they can also be suspended, and that can be common knowledge, and therefore people don't get dinged for their dishonesty.
SPENCER: Yeah, it's funny with that social ritual of saying, "How are you doing?" In the US, it's just very normal to say, "Oh, I'm doing great. I'm doing fine." Other countries sometimes make fun of people in the US for being so upbeat with our answers, because it's just a social norm. But I get an increasingly weird feeling about it the closer my relationship is with the person. If it's just an acquaintance, I don't mind saying, "Oh, I'm doing fine," even if I'm doing badly. But if it's someone I'm really close to, I do actually feel weird saying, "I'm doing fine." I think it's because I take them to be asking more literally how I'm doing.
CHRISTIAN: Yeah, so that's a nice caveat to what I said. It's context-dependent and participant-dependent, and I'm entirely in agreement with you. If I came home and just gave that kind of cursory answer to my wife, and she discovered later on that I'd actually had a terrible day, she could feel betrayed, or feel that trust was eroded, because I wasn't forthcoming about how my day actually was. So yeah, it's participant-dependent in that way, and that makes it even more important to go case by case, rather than drawing blanket generalizations.
SPENCER: There's an interesting case that comes up sometimes where someone's put on the spot, where it's almost like someone's trying to extract information from them. Let me give you an example. Let's say someone goes up to another person and says, "Hey, do you have any unusual kinks? Sexual kinks?" Let's say the person does have a couple, but they don't want to share them. It seems like they've been put in an unfair position, and they have a few options. They could be honest, but they don't want to share. They could lie, but lying feels bad. They could hesitate and say, "I'm not gonna answer that," but that might be taken as confirmation that they do have strange kinks. I'm actually more okay with them lying in that case, because they've sort of been coerced into the situation. What do you think?
CHRISTIAN: Good. Yeah. So this is something I talk about, not in this book, The Honesty Crisis, but in my previous work. I think we want to step back as philosophers first. Well, at least I want to step back as a philosopher, because that's the way my mind works. For any virtue, there are ways of failing at that virtue. For honesty, the most common way to fail is to be dishonest. That's the one we've mostly been talking about today, and the one the book is about. Aristotle had this idea that there are two ways of failing: you can go in one direction and have a deficiency, or you can go in the other direction and have an excess. In the case of honesty, someone who overshares is failing as well, because it's inappropriate in certain contexts to share that kind of information with a stranger. Now, your example is a little different, because it involves someone else pressing them for that information, as opposed to them spontaneously sharing. I have an example that I use: you're riding an elevator with a stranger, and the stranger is making conversation. The stranger says, "How's your day going?" And then suddenly you start sharing about your bathroom habits and your finances and how your bank account is looking these days. That's oversharing; that's a failure. Everything you say is literally true. You're not lying, but you're oversharing information that is not appropriate for that situation. Now, what if that stranger was actually pressing you, saying, "Hey, I don't know you at all, but I'm really curious. What do you like to do in the bedroom that's a little bit unusual?" How do you respond? I think the appropriate thing to say there would be, "None of your business. Maybe I have some things, maybe I don't, but it's just not appropriate for you to be asking that question, given our relationship and how little we know each other." That doesn't have to signal whether you are engaging in that behavior or not. It just keeps it a secret, because it's out of line for that person to be asking.
SPENCER: I think the hardest case is when refusing to answer is actually interpreted as an answer.
CHRISTIAN: Yeah, that's right. That would be harder. You have options. You can be silent, which is a kind of refusal to answer, but that's also a signal; maybe the answer is yes. You can lie. You can engage in that paltering: give a misleading answer, or a bullshit answer. There are different things you can do. Then there's the further question of what you should do. Are you, morally speaking, permitted to lie to cover up the truth? Here, I'm going to stick to my policy: maybe it depends on what we're talking about. If I'm a government official and I have top-secret knowledge about when the attack is going to happen, or the nuclear codes, or something like that, and this person is really pushing me on it, I think it would be morally justifiable for me to lie. Or maybe they don't even know whether I have the codes; they're trying to figure that out, and if they do figure out that I have them, they're going to kidnap me or whatever and extract the information from me. I think we can make the case that it's morally permissible to lie to protect that invaluable information, and to protect myself from the harm that could follow.
SPENCER: So it depends on how serious it is, essentially.
CHRISTIAN: Yeah, but if it's something like bathroom habits or whatnot, maybe it's different; and maybe sexual habits would be different again. I guess I haven't thought about that. I've only thought about the case where you're voluntarily too forthcoming; I haven't thought much about the case where others are pressing you to be more forthcoming than you want to be. But I think at least in some of those cases, it would be morally justifiable to lie. I need to think more about it, so I will not try and BS my way out of that. I'll honestly say I haven't thought it through.
SPENCER: I've thought about it a ton, so I appreciate that. I know we have to wrap up in a minute, but what would you say to the listener who's thinking, "Yeah, I just don't really care that much about being honest"? How would you want to conclude this conversation, in terms of the case for honesty?
CHRISTIAN: Yeah, it depends on who the listener is. Some people are going to be so entrenched in their view, so entrenched in being dishonest, that there's probably nothing I can say to convince them. This is just how it works in philosophy and ethics: it's very hard to convince certain people, amoralists we sometimes call them, or immoralists, to come over to the side of the right and the good and the virtuous. I would try to speak more to people who are on the fence, or who just don't know, or who haven't thought about it much. I think they can be more open-minded and receptive to some arguments. I would go through some of the arguments we looked at earlier, but I'll do it briefly here. I would highlight that honesty, in and of itself, is just a good thing. I would highlight that it's beneficial for society: do you want to live in a society where there's rampant cheating, stealing, and lying, or do you want to live in a society where there's widespread honesty? Most people are going to go with the second. It's good in that respect, and isn't that an important respect? That's a reason why you should care about honesty. Then I would ask whether they're religious or not. If they're religious, almost every religion values honesty, so you can appeal to particular tenets of their religion to support the case for honesty. Lastly, in the interest of time, I'll end with this one: it's even in your self-interest. For many people, this is the most impactful argument. If you can show that honesty is actually beneficial for them, that it enhances their well-being in some way, that's going to have some teeth. Make the case that over their lifespan, even if on particular occasions it might seem better to be dishonest, they're going to live a better life, a more fulfilling life, a life of flourishing (or eudaimonia, as Aristotle would call it) if they are honest. I hope that would be the kind of argument that seals the deal. All these arguments collectively, I think, are good, but that would be the one to end with: look, it's actually in your self-interest.
SPENCER: Christian, thank you so much for coming on the Clearer Thinking Podcast.
CHRISTIAN: It was a great conversation. I loved it, and thank you for having me as a guest.