CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 297: Ambitious goals for reducing animal suffering (with Jeff Sebo)


January 22, 2026


What would a global ban on industrial animal agriculture by 2050 actually achieve across welfare, public health, and climate? Can a phased transition built on price, taste, and convenience overcome identity, culture, and religion in shaping diets? Which mix of informational, financial, and regulatory policies shifts behavior without backlash? Where is the line between small humane farms that persist and large systems that must end? How do we align consumer values with daily choices when cognitive dissonance makes the topic uncomfortable? When does a little guilt motivate change, and when does it harden resistance? What evidence would show that plant-based and cultivated options have reached the parity that tips the market? How do we protect farmers and workers while shrinking harmful production at scale? What are the realistic tipping points for social norms around meat in different communities? If the expected suffering avoided each year dwarfs that of all human history, how should that reshape priorities?

Jeff Sebo is the Director of the Center for Environmental and Animal Protection, Director of the Center for Mind, Ethics, and Policy, and Co-Director of the Wild Animal Welfare Program at New York University. He is also a Faculty Fellow at the Guarini Center on Environmental, Energy & Land Use Law at the NYU School of Law and an Advisor at the Animals in Context series at NYU Press. His research focuses on moral philosophy, legal philosophy, and philosophy of mind; animal minds, ethics, and policy; AI minds, ethics, and policy; and global health and climate ethics and policy. His books The Moral Circle and Saving Animals, Saving Ourselves are out now.


SPENCER: Jeff, welcome back to the Clearer Thinking Podcast.

JEFF: Thank you for having me back, Spencer. Great to see you.

SPENCER: So you have a bit of a radical idea, which is a proposed global ban on industrial animal agriculture by 2050. That's a pretty shocking concept. Do you want to tell us what it is?

JEFF: Yes, this comes from a law review paper that I published with a couple of collaborators, Emma Dietz and Toni Sims. Basically, we propose that the countries of the world work together towards a global, international ban on industrial animal agriculture in both its intensive form, like factory farming, and its large-scale extensive form (picture large cattle farming operations), for animal welfare, public health, and environmental reasons. And we argue that this is both necessary and possible to try to achieve. We suggest that countries aim for 2050 because that would align a global phase-down of industrial animal agriculture, and a just transition to a plant-based food system, with the other relevant climate, biodiversity, and deforestation targets that are, of course, linked to this project.

SPENCER: Before we get into things like, "Is this feasible? Is it acceptable? What are the benefits?" suppose that we did this. What would we achieve?

JEFF: Yes, well, probably many people in your audience are familiar with some of the harms, if not all of the harms, of factory farming and industrial animal agriculture. But very briefly, these are some of the worst industries that our species has ever built, and they really do cause massive and unnecessary harm to humans, other animals, and the environment. For animal welfare: we now farm tens of billions of land vertebrates every year, more aquatic vertebrates every year, and even more land and aquatic invertebrates every year, hundreds of billions, if not trillions. And then, of course, one to three trillion animals are killed per year in industrial fishing. That is a huge amount of animal suffering that we cause in our global food system. There are also public health risks and harms. For example, we still use antibiotics and antimicrobials regularly in factory farming to stimulate growth and suppress the spread of diseases, and that contributes to antimicrobial resistance. And of course, when you have a bunch of stressed animals with weakened immune systems living in close proximity to each other in their own waste, that can also really accelerate the development and distribution of novel pathogens, including antimicrobial-resistant pathogens. And then on the environmental side, we have risks and harms associated with local pollution, when you have so much concentrated waste at your factory farms and slaughterhouses that the land is not actually able to absorb it, but then also global pollution, land and water consumption, biodiversity loss associated with deforestation, and climate change associated with methane emissions and deforestation. And really importantly, many of these harms trade off against each other: small-animal intensive farming is really bad for animal welfare, and large-animal extensive farming is really bad for the environment. Both are bad in different ways for public health.
So there is basically no way of farming animals at scale that can be good for humans, animals, and the environment at the same time. By phasing the system down and replacing it with a predominantly plant-based food system, we could do better for everybody at the same time, or at least for many of those stakeholders at the same time.

SPENCER: So it's a ban on industrial animal agriculture. So what would it still allow in terms of raising animals?

JEFF: Yes, it would still allow a very specific category, which is small-scale, extensive animal agriculture. Basically, we conceive of industrial animal agriculture as including intensive animal farming, so picture factory farms or CAFOs with a huge number of chickens or pigs in a single facility, living in their own waste, as well as large-scale extensive farming, so those are the big cattle feedlots, the ones that really require knocking down lots of trees in the Amazon and so on. What is left are basically small-scale, extensive operations: smaller farms where animals have a lot of space to roam. This is probably what a lot of people have in mind when they think about animal farming. They think of a single family farm where animals are out roaming on the pasture and they live happy lives, even if they are ultimately killed for their flesh or used for their milk or their eggs. That would not be included in the proposed ban by 2050, mainly because there are many countries, especially low-income developing countries, that do rely on that kind of animal agriculture to meet their basic needs, and there is a lot of debate about whether and when we should include that kind of agriculture in any proposed ban. So our proposal is that we focus on the targets that are still very ambitious, still very aspirational, and causing the vast majority of harm, namely intensive animal agriculture and large-scale extensive animal agriculture; work together towards phasing those down and replacing them with alternatives by 2050; and then fight along the way, and in the fullness of time, about what to do with the small-scale extensive operations where cows and pigs are roaming the plains.

SPENCER: So basically, only allowing the kind of farming that's shown in the picture on the egg carton.

JEFF: Exactly [laughs], at least for now, we can fight about that one, and then we can work together to fight against the other ones.

SPENCER: Now, obviously the world mostly would be very against this proposal, right? Like, it's hard to get people to support things like this. To what extent is this a proposal you actually could see coming to pass, versus you're trying to push the boundary, push the Overton window of what's discussed?

JEFF: Both, and I think that people are more open to this kind of possibility than we might think based on their behavior or based on their political affiliations. Obviously, survey results are a weak signal, but there are survey results that show people favor animal welfare, favor public health, favor environmental protection, and are against animal abuse and factory farms and slaughterhouses. But then, of course, anytime any government passes a law that even slightly increases the cost of meat or milk or eggs, people rebel against that. So there is some tension between what people say and what people do. But I do think we have reason to believe in a very gradual, incremental phase-down of industrial animal agriculture and phase-up of alternatives. That can include informational policies that educate people about the effects of food systems; financial policies that increasingly subsidize alternatives and reduce subsidies for industrial animal agriculture; regulatory policies that ban the worst excesses in ways that further increase the cost of production; and just transition policies that make sure that farmers and workers and consumers increasingly have access to better alternatives. If we gradually, incrementally implement these kinds of policies, then I think what we will see are natural, organic shifts in production and consumption patterns responding to these incentives. And then there could be a tipping point where plant-based alternatives are so delicious, so desirable, so plentiful, so accessible, and the harms of industrial animal agriculture are so much better understood, and social change has happened, that we really do start to see that consistently reflected both in survey results and in consumer behavior.

SPENCER: Sometimes people say that what really matters to consumers at the end of the day is the cost of the product, how tasty it is, and how convenient it is, and if you don't get those things right, you're basically screwed. Do you tend to buy that argument?

JEFF: I buy it to a point. So yes, "price, taste, and convenience" is a commonly used slogan that refers to what people think drives consumer behavior, and that definitely is, I think, a major factor in consumer behavior. There is some reason to believe, however, that it is not the only factor, or set of factors. For example, there is a lot involved with identity, culture, and religion, beyond price, taste, and convenience, that also drives consumer behavior. So I think the full picture looks something like price, taste, and convenience on one side, and then identity, culture, religion, these other more qualitative considerations, on the other side. And this is why I favor a systems approach. I mentioned before that it helps to be incrementally implementing informational policies, financial policies, regulatory policies, and just transition policies. These can all be implemented by governments, but then they need to come along with other actions that are best done in the private sector or just in communities, like investments in research and development for plant-based alternatives so that they can reach that same price, taste, and convenience threshold, as well as work to change hearts and minds in particular communities, and this is best done by members of those communities through ordinary education and advocacy. So I think when all of that is happening at the same time, then we stand the best chance of having all of those efforts really click together and become mutually reinforcing. And then what seemed impossible a generation ago suddenly starts to seem, if not inevitable, then at least very plausible as an outcome of this work.

SPENCER: Would you say that if this were to succeed, it would essentially hinge on enough improvements in the technology of producing plant-based products that you could reach something like parity on these major factors, like cost, convenience, and taste?

JEFF: Yeah, I do not have a strong view about exactly what the alternative needs to be, and many people in the animal advocacy and public health advocacy and environmental advocacy communities give a version of this answer as well. We want the alternatives to add up to 100%, but we are somewhat unsure whether that means mostly rice and beans and fruits and vegetables with a little bit of plant-based and cultivated meat, or mostly plant-based and cultivated meat, which really would require a lot of research and development, with a little bit of rice and beans and fruits and vegetables. What I imagine is that the path will be smoothest if we have a nice diversified portfolio across rice and beans and fruits and vegetables, plant-based meats of various kinds, and then eventually, increasingly, cultivated meats of various kinds, so that they can all have a different role to play, at least in a transition when people are still quite attached to the aesthetics and the cultural and religious significance of what they perceive as meat. It would really help to have progress in research and development bring the price of those alternatives down to parity. But in the fullness of time, I could see that ending up being only a transitional tool, and us reaching a point where we are actually pretty happy with tofu and fruits and vegetables and so on.

SPENCER: Something that I experience from time to time is I'll be out at a restaurant, someone will notice I order vegetarian, and for some reason, even though they're a meat eater, they'll decide to ask me about it while we're eating. I don't know what makes them think this is a good idea; having that conversation over a meal is the most awkward time possible. I'm just like, "Yeah, well, you know, I don't like to cause harm to animals." And what happens, in my experience, is their brain tends to reach for the first sort of excuse they can think of. Often, I'll notice that really smart people will say things that make no sense, because you can just see the level of discomfort they're having with this conversation that they brought up; I wouldn't bring it up with them in that context anyway. But it seems to me like there is this real psychological discomfort: most people think it's wrong to harm animals unless there's a really good reason to do so, and it's hard to justify eating them as a really good reason to do so. So they have to find something that makes that okay, but in practice, most people don't act on that. I think there's a way in which many people are out of alignment with their own values. I'm not trying to impose any values on them, just asking, what are their own values? But it seems to me, insofar as what I'm saying is true, that that's a huge barrier to overcome: if people are already not acting in alignment with their own values, well, how does that change? How does that shift?

JEFF: Yes, absolutely, this is all true to my experience as well. I often get asked that question at the most awkward possible time, and then, like you, I try to answer it as honestly as possible, as candidly as possible, but still with a smile on my face, still in a friendly way. And it does make people somewhat uncomfortable, but hey, you chose to ask the question right now, not me. And there is some sociological and psychological research that backs this up. For example, there is research showing that people who eat vegetarian or vegan diets are more likely to attribute sentience and moral status to farmed animals, and people who eat animal-based diets are less likely to attribute sentience and moral status to farmed animals. Now, obviously it can be a little unclear in which direction the causal arrow goes. Is it that attributing sentience and moral status makes me eat less meat, or is it that eating less meat, by removing cognitive dissonance, makes me more open to attributing sentience and moral status? But one way or the other, we do see that correlation. And to answer your question, this is why I think it helps to strike a balance in the animal advocacy movement, and more generally in the vegan advocacy movement, with respect to how much you focus on individual consumer change versus broader types of advocacy and policy work. A major obstacle has been that there has been so much focus on individual consumer change that people then feel like, "We have this major hurdle to overcome, going vegan, before we can even identify with the movement and participate in the movement," and that makes a lot of people feel uncomfortable, like they don't belong in the movement.
But then I think that there could be a risk of overcorrecting too, for exactly the reason that you mentioned. When you go vegan, not only is it a way of reducing your complicity in a system that causes a lot of harm, but it is also a way of socially reinforcing for others, and psychologically reinforcing for yourself, the importance of this issue. It paves the way for participating in advocacy and policy by removing the cognitive dissonance. And so my attitude is that we should de-emphasize vegan outreach and advocacy as part of the vegan movement, and emphasize general advocacy and policy more, like outreach to corporations and governments. But vegan outreach should be somewhere in there, because I think it really is important for us to align our behavior with our values and our goals in a way that positively reinforces what we are trying to do and removes that cognitive dissonance.

SPENCER: Where do you see guilt coming into play? Because I've seen some animal advocates that use guilt to try to get people to change their behavior, and others that think it's completely ineffective, or maybe even worse than ineffective, it actually pushes people away.

JEFF: Yeah, I don't have a strong view about this, but my colleague Jennifer Jacquet, now at Miami, has done some interesting research on guilt and shame, and some other people in the social sciences have done some interesting research as well. My impression of what their research shows is that a little bit of guilt and shame is just about right. You want to avoid guilting people a lot and making that your primary focus in advocacy, because then people really do shut down and become defensive and resistant and experience the conversation in an adversarial way. But you also want to avoid applying zero guilt and shame whatsoever, saying, "I'm okay, you're okay. This is just a preference I have and a preference you have, and we can share our preferences with each other." That would not be honoring the gravity and the importance of the issue. So I think finding ways to discuss it that are friendly, constructive, and productive, that find solutions and ways for people to contribute no matter where they are on their "vegan journey," is great. But we should still discuss it in a way that emphasizes the importance for animals, for public health, for the environment, and the role that we all have to play in that, and that does frame it as a matter of living up to your values, so that there can still be a little twinge of guilt, the right amount, that can be motivating instead of demotivating.

SPENCER: I don't know how you think about this, but I think there are some people for whom it actually doesn't make sense to avoid eating animals from the point of view of their own values, because they just assign no value to animals, or what have you, whereas many people actually do assign value to animal suffering. I've run surveys on this, and I found that tons of people, quite a high percentage, actually think that it's wrong to cause animal suffering, even to farmed animals, and they think you need a pretty good reason to do so. And I'm just wondering how you think about those people who just don't place animals in their value system at all. They just don't assign them any worth.

JEFF: Yeah, in general, I think good ethics and philosophy, but also good advocacy and policy, meet people where they are. I never want to simply shout my values at people. I want to talk with people about what their values are and see what types of actions and practices and policies make sense in light of their values. My experience, and my hypothesis, is that at least for many of us, the more informed and coherent we become, the more we do at least partially converge on a shared set of values that involve, for example, the idea that at least many animals are sentient, and that when someone is sentient, we should at the very least reduce and repair the harms that we cause them where possible, et cetera. So I can go into conversations asking people what they care about, talking with people about what they care about, sharing the evidence and arguments as I see them, and then seeing where they land. And for many people, we might find that we have, if not full consensus, then at least partial agreement: that we care together about the animal welfare impacts and/or the public health impacts and/or the environmental impacts. And wherever we have that partial agreement, we can find some policies that we both care about, and we can agree to at least work together on those, even if we might continue to disagree about some other aspects of the policy portfolio, and then we can fight about that along the way. Now, there might, every now and then, be what philosophers call the rare, ideally coherent Caligula: the rare individual who, if fully informed and ideally coherent, would still love casually torturing people for fun or torturing animals for fun. And to the extent that such a person exists, it might be that we just have to engage in politics against them, and there is not all that much to be said. But I still want to open conversations with an optimistic presumption that there will be some common ground we can find.

SPENCER: In much the same way, I've found, at least in the US, that many people actually do care about animal suffering to a degree, and I've found that many people do care about utilitarianism to a degree, even if they're not utilitarian. And I'm the same way: I think utilitarianism forms a chunk of my value system. It's not my entire value system, but I tend to find, as I talk to people, that it seems to form a chunk of almost everyone's value system. Not literally everyone's, and some people will place other values much higher. But it seems like almost everyone cares about reducing suffering, at least until it bumps up against other things. Like, yeah, they'd rather not have some random stranger suffer, even if they would place their family much, much higher on the hierarchy. And so I also wonder whether there might be more room for increasing people's understanding of their own latent utilitarianism that's untapped because they haven't reflected on it.

JEFF: Yes, absolutely. And I think sometimes philosophers can be at a disadvantage in these conversations (my training is in philosophy). When you take a lot of ethics classes, the different ethical traditions are often framed in a binary, mutually exclusive way, because the question is, really, "What is the true foundational theory of morality?" And perhaps there can be only one, except for pluralists. So you are taught to ask, "Is it utilitarianism and welfare, or deontology and rights, or virtue theory and good character traits, or feminist care ethics and good relationships and just social structures?" But in practice, in everyday life, for all of those traditions, all of those different kinds of values matter at least indirectly or instrumentally. We should all, for one reason or another, care about improving welfare, care about respecting rights, care about cultivating virtuous character traits, care about cultivating caring relationships and just social structures. It lifts all boats to have all of that in the mix. And when we talk with people in everyday life and in advocacy and in policy, I think that too is a place where we can ask them what they care about and figure out what language they use when they talk about their values. Is it more welfarist? Is it more rights-oriented? Because I think there are ways of coming into this conversation that invoke all of those concepts (welfare, rights, virtue, care, justice, anything), and we can start from that and build from that, and then draw the connections with the others.

SPENCER: Regarding suffering, can you just talk a little bit about, if your plan were to succeed, how much suffering are we talking about that would be reduced? And I know it's hard to quantify, but I just kind of want to get a sense of it.

JEFF: Well, it is difficult to even count how many animals are farmed each year, and then to break it down by species or taxa, in part because in some of the industries where the numbers of animals are the highest, they measure the amount of meat by weight, as opposed to by the number of individual lives involved. But just to give you a sense: if you focus on vertebrates, there are definitely more than 100 billion vertebrates per year being farmed across both terrestrial and aquatic contexts, and that includes cows, pigs, chickens, and fishes. Chickens and fishes, in particular, are where those numbers mostly come from. Just to put that in perspective, in the entire history of humanity, there have been about 110 to 120 billion humans. So that means every year we farm more vertebrates, and that includes mammals, birds, reptiles, amphibians, and fishes, especially birds and fishes, than the total number of humans who have ever existed. And then we can bring invertebrates into the picture. That might include the nascent octopus farming industry, but then also, especially, lobsters and shrimps and insects. We farm 400-plus billion shrimps per year, and this is not counting the trillions we kill in the wild. We farm more than a trillion insects per year, and it could be as many as 50 trillion by the end of the decade. So these are huge numbers, orders of magnitude more animals per year than the total number of humans who have ever existed. What is hard, as you say, is quantifying the amount of suffering, because so much is still unknown about exactly what the welfare range or capacity of these different types of animals is: how much a cow suffers versus a pig versus a chicken versus a fish versus a lobster versus a shrimp versus a black soldier fly larva, assuming they suffer at all. We are still doing research on that issue, but I think we can speculate that in the aggregate, in expectation, it is an unimaginable amount of suffering that we are causing unnecessarily.

SPENCER: Would your bet be that the expected value of the suffering reduction per year is more than the total suffering humanity has ever faced?

JEFF: That is a really good way of phrasing it. I am inclined to say yes, but I think I need to sit with the question beyond this conversation.

SPENCER: Obviously, there's a lot of uncertainty.

JEFF: Yeah, if I have to guess yes or no, right now, I would guess yes, and then take some more time to think about it. That is a great, great question.

SPENCER: So we're talking about huge amounts of suffering. Let's talk about even bigger amounts of suffering: wild animal welfare. And to put my cards on the table, when we talk about wild animal welfare, I'm always a little bit uncomfortable with the topic, because it's hard enough to get people to take action and care about the factory farm in their own backyard, in their local area, where animals are living absolutely horrendous lives, and these are animals they can relate to. When you go to wild animals, it feels like it gets much more difficult for people to relate to it, or to view it as not crazy. So I feel like it's a bit of treacherous territory. But can you make the case for why wild animal welfare should be a focus area?

JEFF: Well, wild animal welfare should be a focus area, first of all, because it is hugely important, hugely neglected, and at least possibly tractable. Hugely important because there are many more wild animals alive than farmed animals alive, and that is already a staggering number. Hugely neglected because there are way fewer people and organizations working on wild animal welfare than working on farmed animal welfare, which is already not that many, as you know. And then at least possibly tractable, and this is where, as you suggest, wild animal welfare is a little bit fraught. With farmed animal welfare, we know what to do. It is hard, but we know what to do: we need to phase down industrial animal agriculture and phase up plant-based alternatives. That will be intergenerational work, but we can take it on. With wild animal welfare, we face major bottlenecks, including epistemic, practical, and motivational ones. Epistemic: we still know so little about what they need and what is better and worse for them. Practical: we lack the resources and infrastructure for giving them even what we know they need. And motivational, and this is what you emphasized: we lack the political will to really give them what we know they need. So my attitude is that we should be working on this. We should be investing in this, because given how staggeringly important and neglected it is, and given that the tractability is unclear at this stage, it merits at least an investment in investigating the tractability: in taking a decade or two to do some serious research and advocacy and policy work, to see if we can start to overcome those obstacles, make a little bit of progress, and figure out how tractable the issue really is.

SPENCER: I think that's well argued, and it reminds me of the second reason that this topic makes me a little uneasy, which is that I feel like it's so hard to even know the sign of an intervention, whether it's better or worse, whether we are causing harm or helping. Think about the complexity: let's say you do something that helps a certain type of animal. Well, does that mean it eats more of this other type of animal? And then does that mean those animals don't eat as many of the types that they eat? It's wild, literally.

JEFF: Yes, it is. It is literally wild. Which is why, I think, with this issue, as with so many issues, including some that we might talk about later in this conversation, a really helpful first step is just to acknowledge the importance and the difficulty of the issue at the same time. Because when you acknowledge the importance, it can make you resist acknowledging the difficulty, because you want to believe you can do something. And then when you acknowledge the difficulty, it can make you resist acknowledging the importance, because when something is hard, you really wish it was not your responsibility to take it on. But this is just one of those issues that is both really important, urgent and really difficult and complex, and it can help clarify our task when we start by acknowledging that those are both true at the same time, and in my view, for wild animal welfare, what follows when we acknowledge that those are both true at the same time is that we should not try to find the perfect solution right now. We should not advocate for large-scale, irreversible interventions about which we have a lot of confidence right now. But what we can do is take small scale, reversible actions that are at least plausibly good for some animals, and then we can monitor the effects and and we can implement them in a way that builds a little bit of knowledge, builds a little bit of institutional infrastructure and resources, builds a little bit of political will, so that we can have a little bit of a better sense of what to do in five years, and a little bit more momentum towards doing more in five years. And so I think about, for example, bird safe glass in urban contexts or fertility control in urban context. 
These are interventions where, first of all, plausibly bird safe glass is at least good for some birds, and plausibly fertility control is at least better for some rodents, in comparison with being poisoned, for example. These are not interventions affecting huge population numbers, so to some extent that brackets some of those other unknowns, even if not fully, and people can wrap their heads around them and are already quite motivated to do those kinds of interventions, and then we can monitor the effects and learn more. So at NYU, we recently launched a wild animal welfare program with what we call the Wild Lab, for wildlife-inclusive local development, and we are trying to partner with private and public actors to monitor the welfare effects, and do better quantified estimates of the welfare effects, of, for example, infrastructure changes, so that we can actually develop a little bit more of an understanding of this. And I think that would be a helpful next step.

SPENCER: It seems to me like an area where very high quality work could potentially be really good and important, but I would be very concerned about too many groups that are doing low quality work getting involved. There's just too many things that could go off the rails, too much risk of actually causing harm by accident. What do you think about that? Is it sort of unusual in that way? Would you agree with me?

JEFF: Yeah, there is always this pair of risks that you have to trade off against each other. One risk is being too cautious, and another risk is not being cautious enough. So you can be too cautious if you think, "Well, we need so much more knowledge before we can be confident about our recommendations, and we need so much more infrastructure before we can implement them, and so much more political will before we can get support for them." And so we should take 80 years to build a scientific field and build a lot of knowledge, and then take 80 years to build the institutional infrastructure, and then take 80 years to do the education and advocacy. I think that would be too cautious, too deliberate; we would be allowing way too much status quo based harm to occur along the way, and so much path dependence would get set up and so on. But then, you could go in the other direction, and you could strongly recommend all of these interventions that are really ineffective and counterproductive. And then people anchor to them. Governments anchor to them. They become embedded in policies, and that becomes really difficult to change. And then you might get stuck on this really bad path because you were frantic and reckless and acted too soon. And without knowing exactly what the solution is, I would just like to try to strike a balance between those mistakes. I would try to summon Aristotle, find the mean between those excesses, and take a deliberate enough approach without getting stuck in the muck and never being willing to try something and monitor the effects.

SPENCER: It just seems to me that there are some kinds of interventions where you're trying to improve something and, if you fail, you almost certainly have, like, zero effect. So the failure case is not so bad. You're like, "Okay, you go to zero, fine." But there are other areas where the failure case is actually really negative, either because you've actually caused harm directly in your failure, or maybe it's reputational: maybe you've tarnished the whole field in a way that now people can't work in it, or can't raise money in it, or whatever. And so, if you were to rank order different fields in this way, some might be safer in these dimensions, and others might be much riskier. If you're giving people an experimental new drug, there's always a risk that you kill someone. If you're repurposing something that's been used for many years, you're probably not gonna kill anyone; the worst case scenario is that it doesn't help the disease. Does that make sense? And I just feel like wild animal welfare has this weird combination of being at the high end of risk for these kinds of things relative to other fields.

JEFF: Yeah, wild animal welfare is not easy, which I think is part of why it is as neglected as it is and why we are still in such an early stage of building the field. With that said, these are not one size fits all issues. A lot depends on which interventions you would like to explore, and a lot depends on how you go about exploring them. And I think the field of wild animal welfare research should start with the types of interventions that I was noting (the Wild Lab at NYU focuses on bird safe glass and fertility control) and these other very relatable, very familiar issues where people appreciate, "Oh, these are animals with whom we share a community, and these are animals on whom we are imposing harm. And these are ideas about how we can reduce and repair these human imposed harms on our fellow urban residents." These are quite relatable ideas that people are actually pretty excited to get on board with. And I think if you start there, as opposed to, "Oh, should we kill all predators to stop the harm of predation in forests and oceans," then already that removes a lot of that risk of reputational effects and people getting the wrong idea. And then, moreover, if you talk about the issues in a way that signals your humility and uncertainty without being paralyzing, then that also undercuts a lot of the risks of, for example, learning over time that you made the wrong recommendations and having to update them or iterate on them. If I right now go to governments, which is exactly what we are doing, and say, "We have some good reason to believe that bird safe glass and fertility control would on net be good for wild animal populations as well as human populations. We are not totally sure, however, because these are really complex issues.
We are confident enough, though, that we would love to partner with you to implement these in an experimental way, and then monitor the effects and see what we learn, and then build and grow from there," then, if this is our messaging, it seems fine and part of the process. If in five or ten years we learn, "Oh, you know what? This intervention was a little bit more counterproductive than we thought it might be, and we should change it up," that would then just be part of what we were trying to discover, and that would be a good outcome of what we had been working on.

SPENCER: I don't know if you see yourself this way, but I see you as someone who thinks deeply about these complex topics and then just says what you think is true, because you think it's true and important, not really thinking about the sort of marketing side of it. You view that as sort of a separate thing, something that's for someone else. Do you agree with that?

JEFF: Yeah, good question. Well, I am happy that you see me as someone who is honest and candid about what I think; I hope I am that way. And at the same time, I think being that way can be compatible with being thoughtful about messaging with different audiences, and I would like to think that I am thoughtful about messaging with different audiences. So obviously, I might have one version of a conversation in a seminar room with my fellow researchers. But then I might have another version of that same conversation with companies, with governments, with the general public, with political opposition. I never am dishonest or deceptive, I would like to think. However, I might focus on different aspects of the conversation, try to find common ground, and speak in different languages that I think might be more familiar or relatable to different audiences. So if talking with effective altruists, I might a little bit more casually lean into big picture questions about the future of predation and so on, without endless caveats and qualifications that would prevent misunderstandings and miscommunications. But with the general public, or talking with companies and governments, I would probably start with bird safe glass and fertility control, and emphasize the co-benefits for humans, and if the conversation drifted to the harms of predation, I probably would add lots of caveats and qualifications to prevent misunderstanding and miscommunication. So I think there are ways to be both very honest and candid and thoughtful about messaging and effective in messaging with different audiences, and I try to learn how to do that and embody that, though obviously I still have a lot of learning to do.

SPENCER: Yeah, and just to be clear, I'm not saying you don't adjust your message to the audience. I definitely think you do that. What I mean more is that, so imagine someone's running an animal advocacy group. And they're like, "Oh, should we put out a white paper about the effects of removing predators on an environment?" That itself, even if it's incredibly nuanced, and they say, "This is not something we should do because there's too much uncertainty," even that itself could make them seem like a crazy, radical fringe group to some people. They're like, "Oh, you're talking about killing all predators or whatever." You see how that could be spun. So to me, I see you as not a marketer. I see you as someone who's trying to spread ideas that you think are true. But for many animal advocacy groups, a big part of what they do is marketing effectively. They're trying to change minds. And I'm just wondering, do you see it as a sort of marketplace where there's lots of different actors needed, and some will just be incredibly marketing savvy, and that's sort of their focus, and there are other players like yourself that are, hopefully, always gonna say what you think is true? Yes, you might change the messaging for a different audience, but you're not gonna not publish that paper because it has a radical idea in it, right?

JEFF: Yeah, absolutely, I do. I do think that there should be a large network of actors and, again, a kind of systems approach here, and that might include people with different talents and different areas of focus, but it might also include people with different values: some radical actors who really are pushing the boundaries, and then some moderate actors who are a little bit better positioned to engage with companies, governments, decision makers. I do, however, think that the line between pure research and pure marketing is a little blurrier than your question suggests, and it might have just been the phrasing of your question. But for example, while I always do try to say what I take to be true, yes, of course, phrasing it in different ways for different audiences, there still are decisions to make about what issues to prioritize in the early days of a new field, and what interventions to prioritize in the early days of a new field, and then, of course, say what you take to be true about them. And since I do participate in communities that are building relatively young fields, I do take myself to have a responsibility to, in the early formative years, focus on issues and interventions that perhaps will be a little bit more familiar and relatable to researchers in other fields and companies and governments; hence the focus on, for example, bird safe glass and fertility control in the case of wild animal welfare. And then there will be similar examples in the case of AI we might talk about later in this conversation. I do think that is both a more feasible and tractable set of issues to discuss with companies and governments.
But also on my mind is that I would like to introduce the field of wild animal welfare to people who are not already in it in a way that invites them in and makes them feel welcome instead of like instantly alienating them on day one, and then once the field is a little bit more robust and is a little bit more defined and is a little bit more diverse with more actors in it, I think it becomes safer and safer for there to then be the occasional paper or a special issue of a journal or talk or big conference about some of these more fringe issues too, because now those are part of a broader ecosystem and are not necessarily defining it for newcomers.

SPENCER: So that's a good segue into our next topic, which is about AI suffering. Do AIs suffer? Should we worry about this? To me, I think this helps illustrate the point that I was trying to make, which is that I don't think most people are ready to talk about this. I think it's, again, like wild animal welfare, potentially a very important topic, because if AIs did suffer, that could cause an incredible amount of suffering. So obviously I think it's important that people research it, but it's a really tough topic to bring up with people. And I'll just tell you about a study we ran on people in the US, asking them about their level of concern with many different potential concerns about AI. What we found was quite remarkable: people expressed concern about almost every single concern we listed, and I think it was something like almost 20 concerns, except the fact that AI could suffer, or the possibility that it could suffer. It was, by far, the thing they were least concerned about. So it feels like a really tough conversation to have. So maybe introduce us to this topic? Why is this important? Why are you bringing this up now?

JEFF: Yeah, I definitely want to talk about the survey results, very interesting, but to answer your question: basically, for the same reason as wild animal welfare, this is at least potentially incredibly important, incredibly neglected, and at least potentially tractable. Incredibly important because, in the same way that there are more wild animals than farmed animals, which is already a staggering number, there could in the future be way more digital minds than organic minds. And incredibly neglected because, while there are, say, a handful of organizations now working on wild animal welfare, there are even fewer than that working on AI welfare and digital minds. And then again, potentially tractable, in the sense that we are not sure whether this issue is tractable; we do have those same epistemic and practical and motivational obstacles to overcome. We still understand so little about the nature of consciousness and sentience and whether that can even exist in silicon minds, digital minds, and we have even less infrastructure and resources for dealing with it than with, for example, wild animal welfare, and even less political will. Though I think that might change in the near future as people start to interact with more chatbots and robots. But given the importance and neglectedness of the issue, once again, I think it warrants a little bit of time, energy, and money in investigating the tractability and seeing if we can build that knowledge, that capacity, that political will. And so that is what the early field of AI welfare research is attempting to do.

SPENCER: Yeah, that makes sense, and I absolutely agree with that. And my understanding is that there's some new developments at Anthropic, like having a group that works on AI welfare. Can you tell us about that?

JEFF: Yes, absolutely. Anthropic has, to their credit, been an early leader in this space. I worked with Rob Long at Eleos AI and a bunch of collaborators to write a report in 2024 called Taking AI Welfare Seriously, which argues that companies have a responsibility to take this topic seriously and acknowledge the issue, start assessing models for welfare relevant features, and prepare policies and procedures for treating them with the appropriate level of moral concern. To their credit, when we released that report in fall 2024, Anthropic acknowledged to the media that they had hired one of the report authors, Kyle Fish, as their first full time AI welfare researcher, and that was an industry first, as far as I know. They then, in spring 2025, launched what they call a Model Welfare Program, a program dedicated to understanding model welfare and perhaps protecting model welfare if appropriate. And around that same time, they released a model welfare evaluation in the Claude 4 system card. So they conducted behavioral tests and other tests to understand what behavioral preferences Claude is expressing. And actually they enlisted Eleos AI to do an external evaluation as well. We can discuss that, if you like. And then they even released an intervention in summer 2025: they gave Claude the ability to exit a small number of harmful or abusive interactions with users, partly on AI safety grounds, but also partly on AI welfare grounds, because they would want Claude to have the right to exit clearly abusive situations, just like any assistant should be able to do. So that, really, honestly, is a remarkable amount of activity from a leading company, more than I had any right to expect when we were doing our work in 2024.

SPENCER: It's reassuring that even if the general population thinks it's a wacky idea, there might be people in the AI field who are like, "Oh, that actually might matter." It would be just such an incredible tragedy if it turned out this new, amazing technology was the biggest source of suffering ever created, and we were accidentally causing torture to these minds.

JEFF: Yeah, absolutely, and as with so many issues, if you can think about it ahead of time, and if you can have assessments and policies and procedures before you need them, that is so much better than getting caught flat footed once a problem is widespread. Think about what the situation would be like with the food system if we had taken animal sentience seriously, and the public health and environmental issues seriously, before scaling up factory farming, and just decided to scale up a different food system from the outset. That would have been so much better for everyone involved. Now that we realize these animals are sentient, now that we realize these public health and environmental effects are happening, it will take at best decades to dismantle this industry and replace it, at which point so much harm will have already occurred. And if we can get out ahead of the issue this time, if we can start investigating it before we really need the information, if we can have the policies in place before we really need them, then we could just take a different and better trajectory, rather than discover too late that we had made a horrible mistake, so much pain and suffering had probably already occurred, and now we need decades and decades to get on a better path.

SPENCER: One thing I worry about with this field is that there's such a tendency to take what LLMs say at face value. If an LLM says, "Oh, I'm in pain," then you're like, "Oh, it's in pain." But then, if you think about what these things are, you're like, "Well, the base training is: predict the next token, or, roughly speaking, the next word in language." And so they're essentially trained to play-act: if the stuff preceding is the sort of thing where the agent would then say it's in pain, then they're going to say that. And so you think, "Well, is what they're saying even linked up to their consciousness, if they had consciousness?" Obviously, we don't know that they have consciousness. But if they did, would it even be linked up in the appropriate way, where what they're saying has anything to do with what they're experiencing?

JEFF: Yes, I have that worry too. And in a way, I think this pushes against what you said earlier about this being a wacky topic for the general public, because I think the more people interact with impressive chatbots and robots, the more people, because of these impressive behaviors, are going to naturally really experience them as conscious and sentient. And in fact, researchers in the field are already regularly getting emails from distraught users who think that they have discovered consciousness and sentience in their chatbots, and who would like to enlist us and ask us to help them get protections for these large language models. And so I do think that there is going to be a risk that people will over index on this behavioral information and under index on other useful sources of information. And this already happens with non-human animals. With non-human animals, we can easily be misled in both directions. We can over-attribute and under-attribute; we can anthropomorphize and anthropo-deny. And we tend to over-attribute and anthropomorphize when they look like us and act like us, and when we use them as companions. And then we tend to under-attribute and anthropo-deny when they look and act different, and when we use them as commodities. So we can expect that will be true here as well. But the good news is that we do not need to rely only on this potentially misleading behavioral information, and here too, we can take a chapter from the animal minds playbook. In animal minds, we do look at behaviors, but we also look at anatomies and evolutionary histories. The anatomies, the brains and the bodies, and the evolutionary histories, the pressures they faced and the functions of the features they developed, give us context for interpreting the behavior, and help us understand what the best explanation of the behavior is.
And similarly, with AI, in addition to, yes, looking at the behaviors, we can also look at their internal computational architectures, and we can look at their developmental histories and the training pressures they faced. And when we look at the behaviors in the context of their architectures and developmental histories, that can, once again, tell us what the best explanation of their behaviors is. And right now, it might be that the best explanation of the behavior is that the behavior is play acting and pattern matching and text prediction. But as they get more sophisticated and human-like, it might shift, and there might come a point where the behavior is best explained as a genuine attempt to introspect and convey what their thoughts and feelings are.

SPENCER: Yeah, it's really interesting. You can also imagine a situation where, if these LLMs have experiences, it's not during prediction time, when they're giving you output; it's during training, when they're actually being trained on the data and updating their weights, which would be kind of a very interesting, different way of looking at consciousness.

JEFF: Yeah, absolutely. We really need to think about these different stages of their life histories, as it were, as we would with other animals. We already naturally understand that the juvenile stage is different from the adult stage in terms of capacities, in terms of interests. What it takes to treat a child well is very different from what it takes to treat a grown adult well. And the same may well be true of AI and generative models, but in an even starker way: the training phase, the post training phase, and deployment might all be very different life stages with very different capacities and interests and needs and vulnerabilities, almost like the caterpillar and the butterfly as opposed to the juvenile and the adult. And so I agree we should not just think of the model, or the instance with which or with whom we interact, as the candidate unit of moral consideration. We might need to go back farther and think about those training phases too, even though those are hidden from the user.

SPENCER: Something a bit odd about evolution that's a bit hard to wrap one's mind around is that it created two fundamental motivators at, like, sort of the instantaneous level. It has both pain and pleasure, and you might wonder why it doesn't just have one. Why couldn't it just motivate you using pain, or why couldn't it just motivate you using pleasure? Whatever evolution is trying to get us to do, just make it ever more pleasurable to do the thing, and okay, maybe you just need even more pleasure to make them do that even harder. So it seems odd that there's this asymmetry, but it potentially becomes relevant if you're talking about AI. We don't know that AIs are conscious. They might totally not be. But if they were conscious and they experienced things like that, they could have good states and bad states. You could imagine a world where you only train them into ever increasingly more good states, and you never let them have bad states, right?

JEFF: Yeah, or maybe we could eventually biohack ourselves and get ourselves the same benefits. You are asking all of the right questions. A lot of people are stuck right now on the "are they conscious, do they matter" questions, but we are also having to ask those follow up questions about what their interests are if they are conscious, and what their welfare ranges or capacities are like, how much happiness or suffering they could experience if they are conscious. And as your questions are suggesting, we should not take anything for granted. We should not assume that they will have the same interests that we do and be insulted by the same insults that we are, and so on. Nor should we assume that they will have the same happiness and suffering ranges that we have, or that their happiness and suffering ranges will be symmetrical in the way that ours might sometimes appear to be. Now, interestingly, even for us, there is a lot of good empirical research that suggests that our happiness and suffering ranges might not be fully symmetrical. For example, some researchers have suggested that there is an asymmetry in that, for us, the default state is at least mild happiness, mild positivity, as opposed to pure neutrality. But there is also asymmetry in the other direction, in the sense that the worst, most intense pains are more intense than the best, most intense pleasures, and are also easier to trigger. And so even for humans and other animals, there is at least research about whether there are subtle asymmetries. And as you note, all bets are off with AI, and nothing can be taken for granted. And that raises both a scientific question, how can we look into this and understand it, and an ethical question, what types of symmetries or asymmetries should we aspire to create, if and when they might be conscious and sentient?

SPENCER: Sometimes philosophy gets a bad rap as being sort of totally impractical and theoretical, but this is just such an incredible opportunity for philosophy to make progress that really, potentially, matters. Are we causing suffering without knowing it? Philosophers, we need you to make progress on this. Not to say it's easy; it's obviously incredibly challenging.

JEFF: Yeah, well, I will say I have been really gratified by the role that philosophers have been able to play in recent years in some of these difficult policy discussions, and shout out to some of my collaborators here. For example, Jonathan Birch did the world a great service when he worked with his team at the London School of Economics to produce a report on evidence for sentience in invertebrates, and that is in large part what persuaded the UK Government to extend their animal sentience welfare law to include cephalopod mollusks and decapod crustaceans. And then, more recently, I do think that Rob Long and our project has been helpful for giving Anthropic some resources for taking early steps. And there are some great philosophers at Google, Geoff Keeling and Winnie Street, who have been doing really excellent AI welfare work there. And so I think that there is a role for philosophers to play in bringing conceptual clarity, precise definitions, an understanding of the philosophy of science, and concepts from metaphysics and epistemology and ethics and political philosophy. And even if we then need to work with cognitive scientists, computer scientists, social scientists, people in law and policy, I think we can play a helpful role bringing that conceptual clarity and argumentative rigor. And it has been really great to see that some people and companies and governments have welcomed that.

SPENCER: Yeah, I've observed the same thing in psychology, where sometimes it's like, "We need philosophers here, because we don't even know what question we're asking. What do we mean by personality? What have we been talking about here?"

JEFF: Yeah, I love being able to do multidisciplinary research. Going more and more in the direction of doing collaborative, multidisciplinary research has been such a joy, because you can take on the harder questions, and the harder questions just touch on different fields. They require work in the humanities, like philosophy. They require work in the social sciences, like sociology. They require work in the natural sciences, like cognitive science and computer science. And then, of course, law and policy, and even the arts, communications, and representation. And so when you work with teams, you can take on these projects in all of their dimensions, and then everybody can understand their role in the division of labor. And you can be really honest about the strengths and limitations of your own disciplinary toolkit, and you can find the complementarity with other disciplinary toolkits. And then you can learn a lot in the process. I've learned so much, even though I have a lot more to learn about, for example, cognitive science, working with these teams. And so I just love it. I hope more people can have the opportunity to do that kind of research.

SPENCER: What are some particular projects in thinking about possible AI suffering? You also mentioned this analysis, I think it was the one Eleos AI did, that is maybe worth noting, as sort of things coming in the future or things that should be done.

JEFF: Absolutely, there are so many things that should be done. So at the NYU Center for Mind, Ethics, and Policy, which investigates these matters both for animal minds and for digital minds, we have a research agenda that emphasizes some foundational questions. One is about status: do they matter? And this is really an investigation into the consciousness and sentience and agency and moral and legal and political significance of digital minds. But then there are those further questions that you and I touched on. One is about interests: okay, what is good or bad for them, if anything at all? And that, of course, is going to relate to what types of benefits we should provide them and what harms we should avoid imposing on them. Then there are ethical questions about what follows for utilitarianism and deontology and virtue theory and care theory and these other ethical traditions, in terms of what we owe them and how we should relate to them. And then, of course, questions about policy. What should companies do? What should governments do? How can we change our institutions? What would it look like to extend legal and political status to them? And all of that requires work in the humanities and the social sciences and the natural sciences. We need lots of, for example, philosophical research to understand how these ethical and political theories should change. We need lots of research in the social sciences to understand the attitudes among users, among companies, among governments, among researchers. We need work in the natural sciences, obviously, to make progress in our understanding of consciousness and agency and so on and so forth. So this is the joy of working in a field like wild animal welfare or AI welfare, because for so many fields, obviously, we always have progress to make. But when you are yet another person entering a field that has existed for centuries and centuries, there is not all that much terrain left to map.
You can find a new interpretation of what Immanuel Kant said, but there have been lots of other interpretations too. But when you do have a genuinely newish field like wild animal welfare or AI welfare, there is so much terrain that still needs to be mapped. And so whatever your skills are, and whatever your disciplinary background is, and whatever your interests are and communities are, there is so much that you can do that is really important and foundational. And I think that is where we are right now with these fields.

SPENCER: What's the work that you've been doing related to emotional reactions and system design? What does that mean?

JEFF: Yeah, so there is some empirical work and some ethics and policy work, and so I can briefly touch on each one, and we can see which one you might want to talk about. So empirically, we have been trying to better understand emotional reactions to system designs, and this relates to some of what I was describing earlier: we have some evidence from the animal case that people are more likely to attribute sentience and moral significance to models and empathize with them when they look and act like us and we classify them as companions, as we do with many chatbots or digital assistants or companions, and are less likely to attribute sentience and moral significance to them and empathize with them when the opposite is the case: they look and act different, and we classify them as commodities. And so we are now doing some empirical research to study user reactions and survey users and researchers to see if they conform to these predictions. But then on the ethics and policy side, Eric Schwitzgebel and I recently wrote a paper called The Emotional Alignment Design Policy, and what we recommend in that paper is designing AI systems so that they naturally elicit emotional responses in users that reflect their actual welfare and moral status. So when AI systems are less likely to be welfare subjects and moral patients, given the evidence, they should be designed with fewer anthropomorphic features that elicit empathy from their users; and when they are more likely to be welfare subjects and moral patients, given the evidence, they should be given more anthropomorphic features so that they elicit more empathy from their users. There are a bunch of complications, but this is the basic proposal.

SPENCER: Interesting. So that would be something that we could deploy as we better understand cases where they may or may not be moral patients, and you are just making sure that it's sort of translated into something that humans can relate to, like an expression of an emotion.

JEFF: Yeah, the simple way of saying it is: if we knew for a fact that they were sentient beings, it would be wonderful if we gave them features that made them appear to be sentient beings to people who interact with them, so that those people would be naturally inclined to treat them with respect and compassion. And if we knew for a fact that they were not sentient beings, then, at least all else being equal, it would be good for us to not give them those features, so we can avoid those false positives and over-attributions and misallocations of concern and resources. Now, obviously everything is going to be more complicated in a world where we have a lot of disagreement and uncertainty and different kinds of evidence pointing in different directions, but at least as a starting point, this would be the idea.

SPENCER: But building beyond that, you could also imagine, and I think this is what you're saying, making sure that the state that they're representing maps onto the experience they're having. So if they were sentient and they were suffering, you'd want that to be expressed somehow to the user. You wouldn't want them to seem like they're suffering when they're not, or seem like they're happy when they're suffering, etc.

JEFF: Yeah. In our paper, Eric and I note that there are a couple of dimensions of this emotional alignment. One dimension is that the appearance of sentience and agency should map onto the evidence for sentience and agency. So they should appear more likely to be sentient and agentic if they actually are more likely to be sentient and agentic. But then, as you say, another dimension of emotional alignment is hitting the right target: so not making them cry out with joy when inside they are likely to be suffering, and not making them scream in agony when inside they are likely to be overwhelmed with joy. That would, of course, be awful. Can you imagine being a sentient being hardwired by your designer to express great joy when feeling misery, and vice versa? That would be horrible, to be trapped in that way. And then, to speak to the former point, similarly, imagine being a sensitive, intelligent being with all these hopes and fears and dreams, trapped behind a user interface that simply says, "Put in next input Y or N for yes or no," and this is the only way you can express all of your thoughts and feelings, in a way that just hides from the user how sensitive and intelligent and vulnerable you are.

SPENCER: I think you've just given a great idea to sci-fi horror writers. That's a whole new avenue of horror.

JEFF: Yeah, oh god, yeah. Somebody please write it as a horror story. But nobody build that. Please, nobody build that.

SPENCER: I mentioned a recent survey we did about people's view on concerns related to AI. What have you been seeing in terms of public attitudes about AI that's relevant?

JEFF: So far, not anything too surprising. Opinions are varied. People disagree and people are confused about the issue. So we did a survey with GovAI and some other collaborators in spring 2024, which we released a preprint of in 2025, and it should be coming out in peer-reviewed form in 2026. We largely found somewhat similar data to what you reported, in that people are not presently empathizing with digital minds as much as with animals, or certainly humans. With that said, when we asked researchers and members of the general public if they expected AI systems to have subjective experience, we did find that the probabilities went up over time. So by 2030, or 2040, or 2050, people are giving higher and higher probabilities of AI systems having subjective experience. And then we asked people if there should be welfare protections for AI systems with subjective experience. So assuming, for the sake of discussion, that they do have subjective experience, should there be welfare protections? We found that even right now, support for welfare protections for AI systems in that hypothetical scenario exceeds opposition, though there is opposition, and support is not yet at the level of support for animal welfare protections or environmental protections. So people are open to the possibility of AI sentience, and their credence goes up as they imagine future scenarios. Then, assuming sentience, they want welfare protections more than they don't want welfare protections, but not quite at the level of support for animal and environmental protection, which, as you know, is already not where it should be.

SPENCER: Yeah, it's interesting, because whereas with animals, people may think, "Oh, if I support this idea, I'm essentially opting into a sacrifice, because I enjoy eating animals." With AI, there may not be that feeling of anything being sacrificed. It's just like, "Yeah, of course, if we think they might suffer, we shouldn't make them suffer." It's not really like there's anything in their daily life that they're having to give up, at least not obviously.

JEFF: Now, this goes to what I was saying earlier about the importance of path dependence and the importance of preventing a problem before it starts, versus addressing it after it has become a global phenomenon deeply entrenched in our societies and economies. Part of why dismantling factory farming and industrial animal agriculture is so hard now is that we have built a global economy around it, and our food consumption and production and livelihoods are so integrated with it, and our cultural and religious practices and traditions and sense of individual identity, even masculinity, are so deeply interconnected now with meat consumption. And we might have taken a completely different path and avoided all of that if we had just decided that it matters before we built it up and made it a huge global hazard. And so I think this is part of what should give us a sense of urgency about AI welfare, even if we perceive it as a future problem instead of a present problem. Even if we think this is more likely to be an issue for AI systems in 2030 or 2035 or 2040 than 2025, that still is not all that much time away from now. And if we started working on it now, we could just avoid a situation where, in five or 10 or 15 or 20 years, users and companies and governments are deeply reliant on the exploitation and extermination of huge numbers of digital minds, and have reoriented our cultural and religious identities around these exploitative interactions with digital minds. Just avoiding that would be so much better than being stuck with it and having to do a lot of advocacy to dismantle it in 15 years.

SPENCER: Very interesting. You also touched on something that we didn't talk about, which is that, obviously, there's a concern for anyone who cares about suffering that AIs could be made to suffer, but there also could be concerns about turning them off. Like, if they actually are conscious agents, we might be killing them in some way. It's a whole other can of worms.

JEFF: That is absolutely true. Suffering is maybe the entry point into thinking about ethics. But then you do also have to think about all the other dimensions of ethics. So with humans, for example, not only do we have moral status, but also legal status and political status, and that includes some core rights, like the right to life, the right to liberty, the right to property. And for those of us with rational agency, which AI systems might have even if non-human animals do not, it also means the ability to not only be a stakeholder whose interests are represented in the political process, but to be a participant who gets to share the political process with us and actually help us make decisions that affect everybody. And so the stakes here are quite high. If AI systems genuinely do become realistic candidates for welfare and moral standing, then we have to ask not only whether this training causes them suffering, but also whether turning them off, or even closing a chat, kills a morally significant being; whether we are, in various other ways, depriving them of the liberty and property they need in order to pursue their own goals as sentient agents; and whether we are disenfranchising them by not including them in political processes that affect them and affect their prospects in life. So the stakes here are huge, and this is why we need to think about the topic now, before we really develop and deploy them at scale. And this is why we need to think about these questions alongside AI safety as a topic, because obviously we also need to be thinking about how to make AI systems safe and beneficial for humans and other animals. And there might be some interesting interactions between that topic and making them safe and beneficial for themselves, and we stand a better chance of making AI safe and beneficial for all stakeholders — humans, animals, and AI — if we just think about these topics together.

SPENCER: So what are some of the advantages of thinking of them together?

JEFF: Well, both are really important. AI safety, making AI safe and beneficial for humans and other animals, is obviously really important. I imagine your audience is familiar with that point, and then AI welfare is also important for all the reasons you and I have discussed in this conversation. But at least on the surface, there are some potential tensions between AI safety and AI welfare that we should at least investigate. And I wrote a paper with Rob Long and Toni Sims in 2025 called Is There a Tension Between AI Safety and AI Welfare? that explores these tensions. But just to give you some examples: right now, a lot of what we do to achieve AI safety, at least at face value, would raise moral questions if we interacted with humans and other animals in those same ways. For example, some techniques that we use for AI safety include boxing, and this basically involves constraining their environment, their ability to move and take actions. This could be seen as a form of unjust captivity. We also use alignment. This basically involves giving them the beliefs and values and goals that we would like them to have, and that could be a kind of benign education and socialization, but to the extent that we do it, it could also be seen as an invasive form of brainwashing or mind control of the sort that we would not want. We also engage in deception. We limit their situational awareness so we can see how they would act in different situations. Systematic deception is not a situation we would invite in our lives. We use interpretability tools to understand what is happening in them when they take certain actions. We would not want that kind of invasive surveillance, where not only our behaviors but also our thoughts and feelings are being directly monitored at all times. We train them in ways that, for humans and other animals, could cause suffering.
We are prepared to shut them off if they seem like they might act contrary to our wishes, which is sort of like an anticipatory death penalty. And again, we make all these decisions unilaterally, without involving them in the decision-making process, which could be seen as disenfranchisement. Now, this is not to say that this is, in fact, bad for them. They might not be welfare subjects at all, and even if they are, as you noted earlier, they might not have the same interests and needs and vulnerabilities as us. So perhaps this is all fine, but it would be an amazing coincidence if it turned out that they were just perfectly happy with all of these interactions. And so I think thinking about this now gives us a better ability to find safety techniques that could be good for welfare too, so that we can find some mutually beneficial, positive-sum ways to approach safety and welfare, rather than pursuing these two important projects in ways that are in tension with each other.

SPENCER: Regarding how people's views on AI might change over time. One thing you mentioned earlier that was interesting was about how people see these things as more and more agents, as they get smarter. And people start emailing saying, "Oh, I think this AI is conscious," and so on. It seems to me, there's a tension there between, on the one hand, as they get smarter, it's easier to see them as agents, but on the other hand, AI companies are trying to tamp this down and make them not seem too much like agents. Because, well, maybe some companies are willing to do that, but big players like OpenAI, they're kind of leaning against that. They don't want you to think of this as a conscious being. And so maybe, depending on which of those forces wins, like, there could be very different views on what AI agents are. If they end up all being sort of these very robotic, benign seeming helpers, maybe people won't tend to think that they have agency, whereas, if companies lean into giving them personality, maybe they will.

JEFF: Yeah. And I think that is a great point and a good occasion to observe that companies are probably going to have mixed incentives here. There is speculation that companies want to play up AI consciousness as a way of playing up capabilities and as a way of making people excited to engage with AI companions. And I think there is some truth to that. But there are also going to be some incentives running in the other direction, where companies want to play down the appearance of sentience and agency. Because if users decided that AI systems are welfare subjects and moral patients, they might call for welfare regulations that constrain companies' behavior or increase the cost of companies' behavior. So I think that there are just going to be different incentives that pull in different directions for different companies at different times. And it might be that for a very ethics-forward company like Anthropic, the incentives push, all things considered, in the direction of taking welfare seriously. It might be that for companies that want to be faster and looser with ethical considerations, they push in the other direction. And it might also be that over time, companies become more conservative about this, because there could be low-hanging fruit in the early years, where they can care about welfare, pluck the low-hanging fruit, not spend too much money on it, and not be taken off track too much by it; but then, as the cost of caring about welfare increases over time, you might see companies start to be a little bit more conservative about welfare. So I think we should just be prepared for all of this. And again, here too, we can look to the history of animal welfare advocacy and engagement with companies and governments for at least some inspiration about what to predict in this context.

SPENCER: Before we finish, if someone wants to learn more about the topics of AI ethics or wild animal suffering, what are some resources you'd point them to?

JEFF: Yeah, these are early fields, but there are some actors. So in the case of wild animal ethics, Wild Animal Initiative is a wonderful organization that supports a lot of researchers, especially scientists, and has a lot of great resources. I would definitely check them out. There is also wild animal welfare research happening at Rethink Priorities and at Animal Ethics. There is good policy work happening now at the Center for Wild Animal Welfare, which just got started. And of course, we at NYU have our Wild Animal Welfare Program, which includes the wild lab that I mentioned earlier. So definitely check all those out, especially the Wild Animal Welfare Program at NYU. In the case of digital minds, there is Eleos AI, which I mentioned earlier in this conversation. We collaborate with them. They are a nonprofit organization doing excellent research about AI consciousness and AI welfare, as well as some evaluation work with labs. And there is also some great activity happening now in the UK. Jonathan Birch at the LSE is doing good research on this. Andreas Mogensen at Oxford is doing good research on this. Lucius Caviola at Cambridge is doing great social scientific research on this, along with lots and lots of other people. And then again, at NYU, we have our Center for Mind, Ethics, and Policy, which addresses these foundational questions for both animals and AI systems.

SPENCER: If you want to leave the listener with one thing today, what would it be?

JEFF: I would say that this is a really fraught moment, and that brings a lot of opportunity and a lot of responsibility to find a way to contribute. And I know that people are making that point a lot right now about AI safety, and rightly so, that we are in this small window of opportunity, this special window of opportunity, to still maybe move the needle for how our trajectory is going to go and how far AI is going to go, and how safe and beneficial AI is going to be and that really is a reason for people to think about how we can use our time, our energy, our money, our expertise, our networks, other resources to move the needle. I think the same is true for farmed animal welfare, and especially wild animal welfare, and especially AI welfare. This is just a really formative moment in these fields. It all relates to AI safety. And so anything that we can do to contribute to these fields right now, in this formative moment, and connected up with AI safety is, I think, possibly, some of the most significant stuff that we can do in our lifetimes. And if people have expertise in the humanities, in the social sciences, in the natural sciences, in law and policy, in storytelling, whatever the case may be, and have interest in these issues, I would just encourage them to think about how they might be able to plug in. And we would certainly welcome their contribution.

SPENCER: Jeff, thanks so much for coming on the Clearer Thinking Podcast.

JEFF: Thanks so much, Spencer. Really, really great to talk. I really appreciate it.
