CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 216: What do socialism and effective altruism have in common? (with Garrison Lovely)

June 27, 2024

What does effective altruism look like from a leftist / socialist perspective? Are the far left and EA the only groups that take radical egalitarianism seriously? What are some of the points of agreement and disagreement between EA & socialism? Socialists frequently critique the excesses, harms, and failures of capitalism; but what do they have to say about the effectiveness of capitalism to produce wealth, goods, and services? Is socialism just a top-down mirror of capitalism? How difficult is it to mix and match economic tools or systems? Why is the left not more tuned into AI development? What are the three main sides in AI debates right now? Why are there so many disagreements between AI safety and AI ethics groups? What do the incentive structures look like for governments regarding AGI? Should the world create a CERN-like entity to manage and regulate AI research? How should we think about AI research in light of the trend of AI non-profits joining forces with or being subsumed by for-profit corporations? How might for-profit corporations handle existential risks from AI if those risks seem overwhelmingly likely to become reality?

Garrison Lovely is a Brooklyn-based freelance journalist with cover stories in The Nation and Jacobin and long-form work in BBC Future, Vox, Current Affairs, and elsewhere. He has appeared on CBS News Sunday Morning, The Weather Channel, The Majority Report, and SiriusXM. He hosts the podcast The Most Interesting People I Know. His writing has been referenced in publications like The New Yorker (by Ted Chiang), ProPublica, New York Magazine, The New Republic, and GQ. Read his writings on his Substack; learn more about his work at his website, garrisonlovely.com; or email him at tgarrisonlovely@gmail.com.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today! In this episode, Spencer speaks with Garrison Lovely about leftism and socialism, Garrison's experience working at McKinsey, factions in the AI community, and the regulation of AI capabilities.

SPENCER: Garrison, welcome.

GARRISON: Thanks so much for having me.

SPENCER: You have a very leftist socialist perspective, yet you come to conclusions that are very effective altruist in nature, which I find very unusual. Usually effective altruists, while they are left-leaning, don't tend to take a socialist perspective. And yet you end up with similar ideas from this different angle. So why don't we start there? Tell me about your worldview and your philosophy and how you end up approaching effective altruism from this different angle.

GARRISON: Yeah, I guess I radicalized to the Left and Effective Altruism around the same time, which is unusual, but I didn't think of them as being in tension. I think the election of Donald Trump and working at McKinsey and serving some unsavory clients just made me really open to radically different worldviews, and I became dissatisfied with the Establishment. I think of both as being — at least in my view of them — radically egalitarian. I think it's pretty wild that people around the world are not treated equally in how people prioritize their donations. I understand that governments are going to prioritize their own citizens but, just as individuals, I don't see why I should care more about people in my own country than people overseas. And only the Left and Effective Altruism really seem to take that seriously. That leads to different conclusions and they prioritize different things, of course, but that was the thing that really united them in my mind. Another thing is, wealth inequality is just pretty frustrating. There are people who are dying from preventable diseases whose lives could be saved for $5,000, and there are people who have billions of dollars, thousands of times more money than they could ever conceivably consume in their lifetimes. EA looks at that and says people should donate more money. And socialists look at that and say our society is failing at a moral level to allocate resources in a way that promotes maximum benefit to everybody. I'm pretty sold on the core ideas of Effective Altruism, of trying to help others using evidence and trying to be impartial in how to help other people when you're trying to help them. And then I think socialism is a good way of understanding power in the world and why some people have so much money, and why the world arrangement is the way that it is. I think EA doesn't really ask those kinds of questions in the same way. It just looks at the world as it is and says, "How do we most improve it?" I'm happy to go into that in more detail. I think it's a really, really big topic.

SPENCER: You point out a couple ways that these two worldviews align. One is this radical inclusiveness, that someone on the other side of the world who's suffering deeply matters and shouldn't be dismissed just because they're not in your country or in your community. The second is how effectively wealth can be used to help the world, and yet a lot of wealth is used frivolously. But there's surely a lot of tension between these worldviews as well. What do you see as some of the biggest tensions?

GARRISON: I think there are tensions more in how these things cash out in practice, rather than in principle. Maybe the biggest thing is that, to be a leftist, I think, is to be an anti-capitalist; it's almost an axiom of being a leftist. Effective altruists are more cause-neutral. They just go, "Okay, if I'm going to be impartial, if I'm going to try and help others as much as possible using evidence and reason to inform my decisions, what do I do?" And there are no upstream commitments besides that, more or less. And so that narrows the leftist approach to things that look at capitalism and look at the labor-management relationship as core modes of analysis and core areas to intervene on. And one other tension in practice is, Effective Altruism has — historically and still now — relied on billionaire philanthropy, and I think there's a revulsion to billionaire philanthropy on the Left. Within the Left, there's this focus on mass movements and doing mass politics and activism; whereas, Effective Altruism has way fewer people involved in it, and it's more focused on technocratic kinds of fixes, or finding very specific points of leverage and influencing people in power behind the scenes, more so than doing big protest movements; although, there are some EA-related people who are now protesting AI companies and moving in that direction, and there are plenty of effective animal advocates who do protests and mass mobilizations to fight for better conditions for farmed animals. So I don't want to pretend that there's a hard line between these things, but this is just something that happens to be a tension, I think, in terms of the approach each of the groups takes.

SPENCER: On the anti-capitalist point, well, Effective Altruism tends to work through nonprofit mechanisms. I think most effective altruists are trying to improve the world. They're doing that through nonprofit ventures, not for-profit ventures. At the same time, I think most — but not all — effective altruists would say that capitalism has been a major force for making the world better, that it's pulled an enormous number of people out of poverty, and that, even if a lot of companies are not helping and some companies are harming, the overall infrastructure of capitalism has just improved quality of life to a tremendous degree. I imagine that socialists would have a different perspective. So I'm wondering what would the typical socialist view on that question be? And then, what is your view on it?

GARRISON: It's a good question. Marx marveled at the productive capacities of capitalism. Industrial capitalism was just very effective at producing wealth and goods and services, and it was the distribution of that wealth that a lot of socialists take issue with. I think that the Steven Pinker view of the world — this idea that capitalism has raised tons of people out of poverty — is something that a lot of leftists, I think, reject, or at least are highly skeptical of. I don't want to speak for the whole group, and I think it's a complicated conversation, but my view on it is something like, markets are very effective as a means of exchange and a way of sending price signals. And capitalism uses markets, but that's not the only way you could have markets. You could have a market socialist economy. You'd still use markets, but ownership is rationalized to maximize public benefit. If you look at the global economy, you have poor countries paying billions of dollars in debt payments to rich countries through these pretty predatory and abusive international arrangements. And then you also have China, which is the source of most poverty reduction in the last 30 years. And capitalism defenders will say, "Well, China became more capitalist and market-friendly, and that's why they became so wealthy." But China's model is definitely not one of free market capitalism either. And so I think that creates some problems for this view that neoliberal economics is the path to poverty reductions around the world. I just think we can do better. I think you can take a lot of the best parts about capitalism and market economies and then just add on better labor protections through allowing worker cooperatives or co-determination where workers are on the boards of companies, robust welfare states, universal health programs. And if you look at the countries that have the highest standard of living in the world, the best life outcomes, they're countries that are market economies, but have moved in a social democratic direction, like the Scandinavian countries. There's a good article in "Current Affairs" which talks about the data pointing towards socialism being good where, if you look at measures of how involved the State is in the economy... In Norway, it's higher than in Venezuela, and Norway has really good outcomes. Sweden, similarly, has really good outcomes without oil wells to support it. And I think the United States is a good example of a country that is extremely wealthy but has really mixed outcomes considering that: lower life expectancy than countries that are not as rich as it, higher murder rates, lots of problems that aren't all attributable to the extent it's capitalistic, but I think having a better welfare state and better labor protections would improve the lives of people here.

SPENCER: I think a lot of people that are left-leaning, even if they're not on the far Left, would say, "Yes, in a country like the US, we could improve healthcare. We could give better coverage to people. We could, in many ways, put checks on the market that would make things better." But the vast majority of them would reject the idea that capitalism is the fundamental problem. They might say capitalism is actually good; it's just that it can go too far. We need to put it in check. Whereas, socialists, as I understand it, often want to dismantle capitalism. And so I'm wondering: Is that a fair characterization of socialism, and where do you fall on the dismantling of capitalism?

GARRISON: I think dismantling capitalism is one of the core goals of socialism. That phrase is so big, it almost doesn't mean very much because, if you look at leftist politicians in the United States — like Bernie Sanders and AOC — they're really pursuing reformist social democratic policies. What that looks like in practice is, Medicare-for-all basically destroys the health insurance industry in the United States and replaces it with a government insurance program, a single-payer system. And so you're taking a large fraction of the economy out of the hands of private capitalist markets. And you can say that's a step towards dismantling capitalism. I don't think that's wrong or crazy. And then there's some people who want to blow up the current economic system and replace it with something else, whether that's more central planning from the State, or nationalized industries, or just making it much easier to have workers form cooperatives or be on the boards of companies. I favor something like taking steps to take things out of the market where the market is not doing a good job of providing them. And healthcare is a great example of that, where the United States pays more for worse outcomes when it comes to healthcare than basically any other country. And I think that radically changing everything at once just doesn't often work out well. Noam Chomsky is a famous leftist, but he considers himself a Burkean leftist, where he really does have some humility around changing too much, too fast, and thinks that things should go in a direction. But I think there are more ways for things to go wrong than for them to go right. And to the extent society works, it's made up of a bunch of decisions that are compounded over time. And so I think we should just have some respect for that reality while moving firmly in a direction that improves the welfare of people, and prioritizes that over private gain in financial markets.

SPENCER: What a lot of people say in response to things like that is that, in trying to regulate these systems, we don't necessarily make them better. We actually often make them worse by increasing transaction costs, adding red tape, preventing price discovery, et cetera. And so in order to make them better, you need to not just see a problem, but you have to actually have a solution that's better than the imperfect current solution. So you need a level of understanding, a level of competence. I'm wondering how you feel about that, like if the current government were to (let's say) change to a single-payer system, how confident are you that it would actually be an improvement?

GARRISON: I think health insurance, as it is in the United States, is an incredible example of horrible bureaucracy that makes things worse, more expensive, more frustrating for the people involved, with very little to show for it. I'm not on Medicare, but having something that's free at the point of service, from the consumer's perspective, is just much better, I think. You don't have to do this thing where you go to the doctor and then, as you leave, you ask, "Oh, is there anything I need to do?" And they're like, "Nope." Then you leave, and then get a $150 bill in the mail, and you have to figure out how to pay it. And that's with insurance. And then if you're uninsured... I tried to get a flu shot and a COVID vaccine in January at my local CVS and they wouldn't let me because I didn't have insurance. And I couldn't even pay out-of-pocket for some of these things, versus in countries that have single-payer systems or national health systems. There might be problems there, like the NHS in Britain has wait times for certain procedures. But right now, we are also doing triage in the United States, and we're making those choices just based on how much money people can pay. So I think it depends a lot on the specific policy. The FDA has been criticized a lot for how it handled approving COVID tests. The United States had two COVID tests and some European countries had dozens, and they were much cheaper. And Europe is — overall, I think — a much more heavily regulated place than the United States. But I don't think we can just look at things as, there's a dial with more regulations being bad and less being good. I think that each problem requires a different solution, which is so basic, it's not even worth saying. And yeah, I think there are ways in which State intervention can just make things a lot better and smoother, like in the case of moving to a single-payer system where you have layers of bureaucracy that are not adding any value to healthcare in the United States, that could be removed by having one payer. And Medicare already gets things at cheaper rates because they have more bargaining power. There are cases for just natural monopolies being State-run rather than given to the private sector. In New York, we have the worst of both worlds with utilities being State-sanctioned monopolies, where you just have no competition, and it makes sense to not have multiple people running electricity or plumbing or whatever it might be, and you still have private sector profit-seeking happening. And I think people are not happy with Con Edison or National Grid for very good reasons and it's very frustrating to have this very bad service that you cannot choose, and then have people making money on top of that off of you. And if you've ever tried to pay your bills on the National Grid website, you understand how frustrating this can be. If that were just a State-run monopoly, just a government thing, I think it would be an improvement. And the government can do things competently. NASA exists and put people on the moon. And it can also do things really badly, like the healthcare.gov rollout was really bad. And just a little anecdote there: McKinsey, the consulting company, advised the US government to rely on contractors as much as possible back in the 50s, when the US contractor state was spinning up. They actually did this with NASA as well.
And then in the 2010s, during the Obamacare healthcare.gov rollout, they warned that the rollout was at risk of failing because the government was over-reliant on private contractors and had not built up its own capacities sufficiently. Yeah, I think there are really complicated reasons why the State works and doesn't in different scenarios, and you really just have to look at the specifics. There are examples around the world of governments doing things really effectively and governments bungling things. But that's also true of private businesses, and we can craft smart policies to take this into account.

SPENCER: Doesn't the tension between socialism and capitalism say, on the socialist side, that government, on average, is going to be the better way to do things, or top-down solutions are the better way to do things? Or is that a misunderstanding of socialism?

GARRISON: There are just so many varieties of socialism. There are people who just want to have worker cooperatives in a competitive market economy, where the workers own parts of their company. And so, there are incentives that are aligned, and they get rewarded for succeeding, and that is actually not super top-down, versus some people want to nationalize everything, and that's a more top-down approach. And I think overall, there's no consensus; I won't pretend to say there is. But I think people say you should nationalize some industries — natural monopolies, as I mentioned, being a good example of this — and then others should be left to worker cooperatives. There are also interventions, like I mentioned, like worker co-determination, which is used in Germany. This is where workers are represented on the boards of the companies that they work at, and this is mandated by law. And in Germany, there's been way less of a divergence between productivity and wage gains than you've seen in countries like the United States. The idea there is that worker representation in corporate governance actually changed the share of income going to labor versus management in a way that just kept wages rising higher. You could call that top-down, I guess, but that doesn't really feel like the right way to understand that, in my mind.

SPENCER: It seems like not all the solutions are top-down. Some of them are even bottom-up solutions. But they still have an element of empowering individuals more than empowering companies.

GARRISON: I guess probably empowering labor would be the framing that leftists would use more. Another key marker of left-wing thinking is thinking about the collective and not taking individualistic approaches towards things. One of the core sources of left-wing power is labor power, and solidarity is a term that is often used, which is working together with your fellow workers, or whoever it might be, to coordinate your efforts to push back against a smaller, more concentrated source of power.

SPENCER: When we think about the trade-off between solving things through regulation or government intervention versus solving things in a free-market capacity, I tend to think of these as two powerful tools. And if you go to your toolbox of tools, you're like, "Well, okay, if I'm hitting a nail, I want a hammer. If I have to screw something, maybe I want a screwdriver, etc." That's how I approach this, as this toolkit. There are certain things that complete free markets are incredible at, like when you have a whole bunch of complicated prices for lots of things that you need to set. They're really, really good at setting prices, for example. There are other things they're really bad at, like if you want to control inequality. If you just do a complete free market in a system, you'll tend to find that inequality goes up in that system, for a lot of reasons, due to the dynamics of free markets. So I guess for me, I start with what is the problem we're trying to solve. And I say, "Well, which tool in our toolkit gets to the solution?" If I think of something like healthcare, I think of it as something where you want both tools involved. I have very low confidence that I know how to solve healthcare in the US, for example. But if I had to guess, my best prediction is that we want some combination of the two where, for example, a system that gives you free healthcare for things that are highly evidence-based and cost-effective, but then leverages a free market for everything on top of that. For any kind of less proven treatments or less cost-effective treatments, we'd have a total free market with the goal of helping to prevent loss of innovation. You still want lots of innovation happening and lots of money in the system. Also, it helps address the issue that the big bureaucratic system that's deciding what is cost-effective, maybe it's too slow to adapt, or maybe it's going to miss out on some things that are actually beneficial. So I think of it as, how do we fuse those two systems? I'm just curious: how do you see it differently from how I've presented it there?

GARRISON: I think the system you're proposing there would probably just be a huge, radical change from the status quo in the United States as is because I would guess that the vast majority of care people are receiving in the United States is some proven thing. It's some treatment that we know is the best standard of care for whatever malady you might have, whatever it might be. And so, yeah, I think that would get you most of the way there. I haven't thought a ton about this issue in a while, or dug into how people want to address innovation, because if you could just bring down the cost of hip replacements for everybody to what Medicare pays for them, that would be fine, I think. But for some treatment for something that we have not yet figured out if it works, you don't necessarily want the government to be spending millions of dollars per patient on something that we're not sure of. But you also don't want to prevent future innovation from happening. I don't see an obvious in-principle problem with what you're proposing there. But I think, politically, it would be such a fight to even get to that first part; the health insurance industry is just going to fight you so, so hard. I think people on the Left, by and large, would be like, "Yeah, as long as you're covering everybody and making sure that they all get the standard of care for various maladies, and they don't have a cost at the point of care." What you do beyond that, I don't think people have strong views on, on average.

SPENCER: Okay, because I would have thought they would have a strong negative attitude towards the completely free market component of like, "Oh, and rich people get to buy the cutting-edge new treatment that everyone else doesn't have access to because they can't afford it," etc.

GARRISON: I think there would be an aesthetic revulsion to that. I think that's probably true. Maybe I'm not modeling people's thoughts here very well. But if those things were already not covered by existing insurance, or only by really good insurance, what you're proposing just seems like such an obvious improvement over the status quo.

SPENCER: Before we move on to the next topic, you mentioned briefly that some of your leftist thinking was born out of your direct experience working in consulting. Do you want to just tell us one of the stories that was influential to you in changing your thinking?

GARRISON: It's funny, my really formative experiences while working at McKinsey were serving government clients, actually — specifically Rikers Island and ICE — and I've written about this in "The Nation." My experience at ICE was, we were there doing an HR project, basically an organizational transformation to improve the HR function at the agency. And then Trump became president, and the focus totally changed to hiring more deportation officers. And Trump also issued an executive order that basically targeted everybody who was in the United States without documentation for deportation, and so this was a really big change. I and others were really freaked out about this and worried about it, and we raised these concerns to the leadership on the team on the McKinsey side. We had this meeting of the whole McKinsey team. The senior partner on the project, Richard Elder, said a bunch of McKinsey partners didn't agree with Obamacare, and they helped implement it nonetheless. And we have to do our duty here, basically. "We just do execution. We don't do policy," is what he said. And in response to this, I asked, "What would have stopped McKinsey from helping the Nazis procure more barbed wire for their concentration camps?" And he said something about McKinsey being a values-based organization, and if you look into it, those values at the time said nothing about doing good in the world. And I think just seeing that McKinsey, this place that is widely regarded as the best consulting firm in the world, and the best of American capitalism, just be so amoral and so unwilling to introspect about the effect that its work was having on the world in helping ICE expand and become more of an organization terrorizing immigrants and deporting more people who I think are doing nothing wrong by being here basically, I think that was just really eye-opening for me. And then there was Trump's election, more broadly: this neoliberal center-left candidate, Hillary Clinton, just could not beat this guy who was just so obviously unqualified and odious. I think another big piece was that economic leftism is actually pretty popular, and Bernie Sanders would have won in 2016 against Trump, I believe. And I think all that coincided to push me towards the Left.

[promo]

SPENCER: It's interesting to think about how to model the behavior of large companies like McKinsey, but also any large company. One way that I like to think about it is that there's founder-led companies where you really have to model the person running it in order to understand their behavior. Like Elon Musk could literally decide to do something for Tesla because he thinks it's a good idea, and they'll just do that. So to understand Tesla, you have to understand Elon Musk to some extent; whereas, I think a lot of companies, they kind of lose their identity, and you can more think of them as profit-maximizing entities or revenue-maximizing entities, or something along those lines. That doesn't mean they maximize it perfectly — they're not perfectly rational — but the way to think about their behavior is just what's going to make them money, or what's going to grow their stock price, or whatever. And then when you think of it that way, you start thinking, if we assume that companies are going to behave this way, that gives us more insight into when companies are good at solving problems versus when they make problems worse. And it hinges on the question of, can you make money by helping people, or can you make money by harming people, or can you make money in a totally zero-sum way where there's no net benefit. And that's going to tell you a lot about whether companies are good or bad.

GARRISON: Yeah, I think that makes sense. And a big point of disagreement between the Left and people who defend capitalism is to what extent profit-seeking leads to good outcomes. And I think some people will say it never leads to good outcomes. Others will say it mostly doesn't lead to good outcomes or will just focus on the many ways in which it doesn't lead to good outcomes. I think the minimum claim I would make is, people who think that profit-seeking always leads to good outcomes are just clearly wrong. There are a ton of market failures. You've actually written a good blog post about ways in which voluntary exchange can lead to bad outcomes for people. And I think leftists are just really focusing on those failures and coming up with policy solutions to them.

SPENCER: Right. If you think about it this way, then the behavior of McKinsey — while it's shocking to see because it's coming out of a particular person's mouth and there's something that feels really icky about that — on the other hand, it's completely what you'd expect, right? This is a big money-making enterprise, and even the individuals in power there, in a certain sense, are a cog in a money-making machine. So unless they're the founder CEO who owns most of the shares, if they're not trying to make money, others are gonna get angry at them and kick them out.

GARRISON: Yeah, and McKinsey is a bit weird because it's a partnership. It's not just a for-profit publicly-traded company. It's a for-profit partnership, and the partners share directly in the profits that are made. And because of the way it's governed — at least, when I was there — it only took one partner, basically, to decide to take on a project, and if they could staff that project, they would do it. So that leads to people doing a bunch of things that I think are bad for the world. I think I was naïve going into that job and it's a little embarrassing at this point to think that they were anything but that. I think they did a really good job of selling themselves as, "We are doing well by doing good," and were not just like Goldman Sachs, amoral sharks trying to maximize money. But in practice, I don't think there's a meaningful difference.

SPENCER: Changing topics. There's a big area that people on the Left haven't really had on their radar, but that you think is gonna have really big implications, and that people on the Left should care about a lot more, and that's the future of AI technology. Why don't we start discussing why you see this as something that people on the Left and socialists should be much more aware of?

GARRISON: I'm just starting from the premise that artificial intelligence is a really big deal and will become an even bigger deal as the technology becomes more capable. This is obviously the direction that society is moving in — taking this stuff seriously. The release of ChatGPT a year and a half ago really was a big kickstarter of that. And then I think on the Left, there's just not as much of a focus on technology as there should be. I think that there's some kind of aesthetic thing where the people who really care about technology and focus on it a lot are just not leftists, by and large, and they often are hostile to the Left. I think the Left associates this with San Francisco and Silicon Valley and sort of pattern-matches people who are worried about artificial intelligence as 'political enemies of mine.' I think it's interesting because, if you look at the situation right now, you have a bunch of unelected capitalists building AI systems with the ultimate goal of replacing human labor. Artificial general intelligence, the explicit goal of OpenAI and Google DeepMind, is a system that can do most cognitive tasks that humans can do but better, and this would just be a very good labor replacement. And so it's interesting that the Left is not more keyed into this, and there are a lot of reasons for that. Some of it's just skepticism that this technology is possible or imminent. And I think some of it is just that the handful of people on the Left who have been into this idea for the past few years have just been very skeptical of AI's capabilities and progress. But I don't think this actually represents the modal democratic socialist viewpoint on this, or how people would think about it if they were presented with a pretty honest description of the situation.

SPENCER: You've described the debates around AI as being three-sided. What are the three sides as you see them?

GARRISON: There are people who think that artificial intelligence poses an existential risk to humanity, as in, it literally could kill everybody or permanently disempower humanity, such that humans are no longer in control of what happens on planet Earth. This is the AI safety crowd or the x-risk crowd, and a lot of these people come from the Effective Altruism community. But it's far bigger than that and precedes it as well. And then there are people who think that artificial intelligence does not pose an existential risk, and that this idea even serves to hype up these products and the technology, masking the ways in which it actually is really bad, like it's not that smart and is biased and will make shit up. And this is the AI ethics crowd, people who are focusing on the immediate existing harms perpetrated by AI systems and how they're deployed. And then there's a third camp, which is AI is amazing, and it won't kill everybody. It'll actually make everything better, or on balance, be really, really good, and we should build it as fast as possible. These are AI boosters. A particularly extreme version of this is the effective accelerationists, the E/ACCs, if you've seen that on Twitter (E slash ACC). And they go even further than that, and have this complicated argument based on thermodynamics about information ordering itself in higher levels of complexity or something, and say that we should hurry this along and oppose any regulation of AI, and we should laud the creation of smarter-than-human AI. And the founder of E/ACC has said things like, "It would be fine if AI replaced us," because he just wants smarter things to exist. The theory is that AI, at some point, will become smarter than humans, and that we should cede the future to it. Larry Page, one of the founders of Google, and Richard Sutton, one of the pioneers of reinforcement learning, hold a view kind of like this, and I opened my recent Jacobin article with quotes from them.

SPENCER: We've seen a lot of tension between the AI safety people and the AI ethics people. For example, you can see them fighting with each other a lot on Twitter. What's going on there? Why is there so much tension?

GARRISON: I think there are a lot of reasons for this. A big one is that the AI safety view requires you to think that AI is now, or will be, very, very capable and smart. For AI systems to pose a threat to literally all humanity, they would need to be really, really, really good, at least in some narrow sense. They'd have to be very, very smart and capable. And then the AI ethics crowd is more focused on the ways in which AI systems fail, the ways in which they're biased, the ways in which they hallucinate or confabulate information, the ways in which they're brittle, and how we just don't really understand them. And it's making this argument that these systems are just not nearly as good as the companies are making them out to be, and they're not as good as the AI safety people are making them out to be. And so by emphasizing the capabilities of these systems a lot — which happens on the AI safety side — the ethics crowd feels like the AI safety community is carrying water for these companies. And in some cases, they'll say, "Oh, they're hyping up their products." Like Sam Altman, when he talks about AI killing everybody, he's really just hyping up OpenAI's products because, if their products can kill everybody, that means they're really, really capable. It means maybe you should invest in them, or buy their products, or whatever it might be. And then I think there's also just a pretty big vibes clash. The AI ethics crowd includes many more women of color. It grew out of machine learning as a field, and is fairly left-wing, not necessarily socialist, but broadly progressive. And then the AI safety crowd grew out of philosophy. Eliezer Yudkowsky, the primary founder of the field, dropped out of eighth grade, I believe, and he's not very friendly to the Left. I think there are a lot of interpersonal dynamics happening here, where people on the AI ethics side had bad experiences with people on the AI safety side in the Bay. And it's true that, sometimes, people on the AI safety side of things can be not the most pleasant. And then there's this other big piece, which is, if you think that AI could kill literally everybody, that dominates other considerations. If you think that there's a one percent chance that it ends the world in the next 50 years, there's a very strong case to be made for marshaling tons of resources to solve that problem, and it swamps any other problem with AI. And so people see it as taking air out of the room. A common phrase is that 'the x-risk narrative dangerously distracts from the existing harms that AI poses.' And I think there's something to this, just in the sense that it does really dominate your thinking if you start to take this super seriously. And I think, in practice, AI safety people do prioritize AI safety over other issues and other policy approaches to a larger degree than other groups.

SPENCER: Would you say, on the AI ethics side, they tend to think of the AI safety people as being more disingenuous, like, "They're not really concerned about AI killing the world; that's just what they say"? Or do you think they view it as more ridiculous? Like, "Oh, they really believe it, but that's a crazy thing to believe. That's not really the danger. The danger is all these immediate effects, or near-term effects, of bias and hallucination and unfairness," et cetera?

GARRISON: I'm sure it varies by person. When Sam Altman or some other AI executive says that AI could kill everybody, I think a lot of ethics people are wary of that and might think it's disingenuous. I've also talked to people who have said, "No, these people are earnest and well-intentioned. They're just mistaken." For some people, the idea that AI can kill everybody is like believing in ghosts or something, and so they might think they're like useful idiots for the boosters or something.

SPENCER: Do you think there's an element of thinking of it as a privileged position? Instead of thinking about the way that people are being negatively impacted now and real lives being affected, you're thinking of this abstract future scenario?

GARRISON: Yeah, I think that's definitely a critique that I've seen leveled at AI safety where people right now are being harmed by these systems. There are famous examples of machine vision systems not recognizing children or people of color in self-driving situations, or mislabeling Black people as gorillas. And I think there are examples of discrimination in automated welfare decisions and workplace surveillance and all kinds of really not good things that are happening now; whereas, the idea that AI will become smarter than people and then disempower them or kill them all, is inherently speculative. The AI safety crowd is much more male, much more White, people who have largely (I think) not been in touch with activism communities or progressive communities before. And I think an argument that I've seen is like, "You only care about this because it could affect you. You don't care about these other problems because they only affect people who are not like you."

SPENCER: Do you think these groups are fundamentally at odds, or is there a lot more room for cooperation between them?

GARRISON: I argue in the Jacobin piece I just wrote that there is actually a lot more room for cooperation, or at least they have more in common with each other — this being the AI safety and the AI ethics groups — than they actually have with the booster or effective accelerationist groups. And the argument I make is that you'll see this divide happening in op-ed pages and on Twitter between safety and ethics, and in reality, the amount of money being spent by the ethics and safety crowds on civil society and lobbying governments is orders of magnitude smaller than the amount of money spent on improving capabilities of AI systems and lobbying governments to have favorable regulations or lobbying against regulations. And there's a far more significant divide between the trillion-dollar companies racing to make AI systems more profitable, and the comparatively poor civil society groups trying to make AI actually reflect human values, because profit is not a great metric to be seeking when you're building systems that have risks like this, and the ethics and safety crowds are both just trying to inject humanity and broad values into these systems. That may or may not jibe with maximizing their profitability.

SPENCER: Do you see a way that these two groups could cooperate in practice? Because it seems like what actually happens is, they tend to focus on their differences and get into heated debates about those differences, rather than saying, "Okay, well, we both want the acceleration to be not at the forefront. We want safety to be at the forefront. We're just focused on different aspects of safety, but maybe there's commonalities there. The better we understand these models, the better that helps both AI ethics and AI safety. The better we are able to monitor these systems, again, the more that helps both of our goals."

GARRISON: Yeah, I think there are a few points of clear agreement between the groups. There have been prominent people in both camps arguing for better whistleblower protections for people at the AI labs. And just yesterday, there was a whistleblower from Microsoft who went public with concerns about Copilot generating images that violate their policies, and this has been known for many months, so that's one. Imposing liabilities on the companies making AI models if those models do harm in the world is something that people on both the safety side and the ethics side have argued for. It just increases the costs on these companies. It's not something that companies would rationally want. It doesn't necessarily have the same kind of dynamics of pulling the ladder up behind them, preventing other companies from joining the top few. And then I think there's been a lot of focus on the idea of a licensing regime. Any company trying to make an AI model above a certain size — like GPT-4 level or bigger (let's say) — would have to get it approved in advance by the government. They would have to institute strict evaluation and testing processes and share information with some government regulator about the model. And this is something that has been pursued and proposed by the AI safety crowd, by and large. And some people on the ethics side of things have argued that this will just lock in the existing top players because only they can afford to actually comply with these regulations, and so this is argued as a form of regulatory capture. I think that the specifics of the proposal really matter. And when you're talking about systems as large as GPT-4, they spent over $100 million training GPT-4. And so if you have to already spend that much money, it's hard to say that small startups are being cut out of this. If you're spending over $100 million on training an AI model, the idea that you cannot then afford to comply with regulations that require testing, evaluations, cybersecurity practices, etc., just doesn't really scan to me. One argument that is made is that, as compute gets cheaper, or as the algorithmic improvements come down the line, you can have smaller and smaller models that reach the same levels of capability, and that any licensing system would have to scale with that and affect more players, and prevent more people from competing here. And then there's another tension around open sourcing models. This is pretty complicated, and it doesn't fall on obvious lines, but some people on the safety side of things really worry that if you open source powerful AI models, anybody can remove any safety fine-tuning on those models very easily. This is definitely true. Once a model is open sourced, you can just very quickly remove any of its safety features, and then it can be used to make bio-weapons or do other kinds of harm. A lot of people on the ethics side come from academia and have a very strong open science, open source position, and so are very much against restricting that.
There are people on the safety side of things who are generally pro open source, because it can help you do research on these models, and it takes some of the power away from the biggest players, because right now, if you want to work on a cutting-edge model, you have to go work for one of the top labs, even if you're doing safety research; whereas, if you had open source models at the level of GPT-4, more people could be doing safety, interpretability research, etc., without having to go work at the labs. And then there's a strong case to share the benefits of these technologies widely, provided that the risks are not too high. And I think some people say right now, it'd be fine to open source these models, but for future models, they'll be too risky. If you have something that literally could take over the world, having everybody have the ability to do that doesn't seem like a stable system.

SPENCER: When I talk to people working on AI capabilities, one of the weakest points in their arguments around safety, broadly construed — whether it's AI safety, or safety as in reducing concerns about ethics — seems to be that they say, "Yeah, we're going to push forward this technology, but we'll put these protections around it." And it's like, "Well, okay, but if you're publishing your papers and how you did it, or you're open sourcing your model, someone else is going to remove the safety immediately. So how exactly does that make a safer world? I'm kind of confused by that cause and effect." It seems to me they don't have a great answer for that.

GARRISON: Yeah. There could be something like, if you make it safer during the training process... If you were worried about a model having the capacity to help make a bio-weapon, and then you just removed any kind of information related to biotechnology from its training data set, you could remove the safety features from that model, but it probably wouldn't be able to make a bio-weapon. It also decreases the usefulness of the resulting model. So there are going to be incentives against doing this. But I think Meta — which is the big company most associated with open sourcing these models — has argued that they have these safety features. But literally, within a day or two of the model — Llama 2, their latest model — being released, people removed the safety features and had a model that was willing to say anything, literally completely jailbroken, not even playing a character, but just will say any racial slur, will tell you how to make a bomb or a weapon or whatever it might be. It's not a huge deal because the model's not that good. But the fear is that, if you have a model that could actually fill in the gaps to teach a relatively novice person how to make a new bio-weapon, then once it's out there, it's just out there. You really cannot call it back.

[promo]

SPENCER: If we go back ten years, it seems like a lot of cutting edge AI work was being done in a more academic context. But now that we have these extremely expensive cutting edge models, it might take, as you say, something like $100 million to train. It's very hard to do that in a pure academic setting. Maybe an academic could collaborate with Google or Meta, but more and more of this research, it seems, is going to be done, and is already being done, within a for-profit context. How do you think of that as changing the dynamic of how AI safety has to go?

GARRISON: I was just in the Bay and talking to a bunch of people working on AI, and some people were talking about moving from academia to the labs. It's like, "Yeah, you'll do the same work. You're just going to add a few zeros to the scale of what you're doing." And the scale just leads to really improved capabilities of these models, and so it's really appealing for a researcher. I think it's not a great situation right now, because you have many of the best AI safety and capabilities researchers in the world having worked at the labs or currently working at the labs, and so they have a financial and social stake in these labs succeeding, even if they're worried about the risks involved. Something I've been thinking about, and others have proposed this before me, is having something like a CERN for AI, an international consortium of governments that work together and pool resources and talent to do research on AI and AI safety. And I think this would be good for a bunch of reasons, and it would have fewer profit-motive considerations going into the research priorities. It would also help with international collaboration instead of competition. Because if the best AI team in the world is at this international consortium, and it has the US and China and European countries all contributing to it and getting benefits from it, I think that's a lot better than having labs within specific countries that are for profit, that are competing with each other. There are arguments against this. If you consolidate the top talent in one place and give it tens of billions of dollars like a government project would get, you might also accelerate your AI timelines, and that could increase risks for other reasons. But I think it's worth exploring, and it would create a public option for AI because, right now, there are just very strong incentives for any researcher to go work at one of these places.

SPENCER: What do you think the incentives are around governments getting involved in trying to build these highly intelligent AIs?

GARRISON: I think it's interesting. Some people, when I spoke to them for the Jacobin piece — like David Chalmers — said, "I think in a socialist world, you would still have strong incentives to build AGI." I think that might be true. But I think firms, companies, have a much stronger incentive to build AGI than governments, basically because firms are trying to maximize profits, by and large. And AGI, if you could make it, would probably be a lot cheaper than having human workers do the same tasks, and you could scale it up much more and run it much faster. That's a great way to decrease one of the biggest input costs into any kind of economic production. And profitability is just revenue minus cost, so if you can decrease that cost a lot, there's a very strong pull to do that; whereas, governments are trying to optimize for a bunch of different things at once: stability, economic growth, geopolitical or military advantage, popular support, international respect, etc. And AGI would help in some of these things, like economic growth would probably radically increase, but it would probably have negative effects on stability. The idea of 100% year-over-year growth is something people have theorized in an AGI world, and that's really hard to wrap your head around, and would have really, really big implications for society. I'm not convinced that society is ready for that. And job loss is another thing where, if your citizens are losing their jobs at a really high rate, and your welfare state is not set up to accommodate that... The United States welfare state is set up so that you have to try and find a job to get any money. It's not set up to handle 90% unemployment, even if there's more wealth than ever. And so that's a real problem. Because of that, I think there's just less of a strong incentive for governments to build AGI. In practice, the forefront of the field, the most capable general models, are coming from these private labs, and governments are not in the game at all, as far as we know. China's taken a stronger interest in AI for a longer time at the national, industrial, and policy level, but I've seen arguments that they're more interested in things like machine vision and facial recognition, natural language processing, than AGI. And I think that's for similar stability reasons where, if you have a surveillance state, having better machine vision and facial recognition might be more helpful for ensuring the stability of that state than a system that just replaces all labor would be.

SPENCER: One thing people associate with democratic governments is that they tend to move slowly on things. It's not always true, but it seems to be often true. In the fast-moving, dynamic world of AI, do you think it's realistic that the government could stay on top of it?

GARRISON: I think this is a super common concern people have. I don't have a really firm view on this, but the US government could, let's say, establish a new regulatory agency that has more flexibility and capacity to respond to an evolving landscape and an industry that's moving very quickly. For example, if you had a compute threshold for some kind of testing requirements, like the Biden executive order — I think it was ten to the 25th FLOPs, which is a count of floating-point operations; this is probably more than GPT-4 was trained with. If it turns out that the most capable models were able to be trained with substantially less compute, they'd need to issue a new executive order to require testing on those models; whereas, if you had a regulatory agency that was just empowered to say, "All right, at this level of capabilities, the government requires you to do these evaluations and red-teaming tests and cybersecurity practices, et cetera, and this agency will inspect your work and make sure it's being done," you could codify those capability thresholds. And I think that's the kind of solution that could work better, but I've not spent as much time as I would like on the policy side of things yet. I don't think it's broadly the case that the government can never keep up with the private sector. I think there are just examples of these agencies that have some degree of autonomy and flexibility in responding to evolving industries.
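To make the compute-threshold idea discussed above concrete, here is a minimal back-of-envelope sketch in Python. It uses the common rule of thumb that training compute for a dense model is roughly 6 x parameters x training tokens; the model sizes and token counts are hypothetical examples, and the 10^25 threshold is simply the figure mentioned in the conversation, not an official regulatory number.

    # Back-of-envelope check of whether a training run would cross a compute
    # threshold of the kind discussed above. The 6 * params * tokens rule of
    # thumb is a standard approximation for dense transformer training compute;
    # the example numbers below are illustrative, not real model specs.

    THRESHOLD_FLOP = 1e25  # illustrative reporting threshold (total FLOPs)

    def training_flops(n_params: float, n_tokens: float) -> float:
        """Approximate total training compute: ~6 FLOPs per parameter per token."""
        return 6 * n_params * n_tokens

    runs = {
        "smaller model (7B params, 2T tokens)": training_flops(7e9, 2e12),
        "frontier-scale model (1T params, 10T tokens)": training_flops(1e12, 1e13),
    }

    for name, flops in runs.items():
        status = "above" if flops >= THRESHOLD_FLOP else "below"
        print(f"{name}: ~{flops:.1e} FLOPs ({status} the {THRESHOLD_FLOP:.0e} threshold)")

Under these assumptions, the smaller run comes in around 8 x 10^22 FLOPs, well below the threshold, while the frontier-scale run lands around 6 x 10^25 and would trigger the requirements, which is the kind of line-drawing a standing regulator could adjust as compute gets cheaper.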

SPENCER: Is there an example you'd point to where you feel like the government — it doesn't have to be the US government, but just government in general — has done a good job of keeping up with new technology and adapting to it?

GARRISON: I knew this question would come. I don't have a great example off the top of my head. I think nuclear energy is an interesting one, where it's not necessarily moving really fast, but Cory Doctorow, the author, I saw him speak a few years ago. This was right after the Mark Zuckerberg Senate hearings where the senators were asking very embarrassing questions, basically looking very dumb, and unable to regulate social media effectively. And Cory said in response to this that the government can regulate complicated things like nuclear energy by calling in a bunch of experts and delegating decision-making to people with more expertise. And nuclear is an interesting one because in some sense, it's over-regulated, or at least I think the optimal level of nuclear power is far higher than we have right now. But it's also extremely safe in the United States. It's never killed anybody. Three-Mile Island was the biggest disaster, and even that did not kill anybody. The FAA in the United States has been very good at regulating airlines, and the US has very, very high air safety. People argue that it's overly regulated or something. But I think they're just examples of the US government making things that are inherently somewhat risky, just much, much safer, through effective regulations. And again, this might hurt innovation, or it might be too much. I don't have a strong view on it. But if you're just trying to make AI safer, if you think AI is inherently really, really risky, I think the US is capable of coming up with regulatory approaches that would increase safety.

SPENCER: Yeah, nuclear is such an interesting example because I think a lot of people would argue that they made it safe by basically making it impossible to build. They raised the cost so high that it can't really be a feasible technology.

GARRISON: But there are nuclear plants in the United States, and they've been running for decades without incident. So maybe they should be building more new ones, but it is just the case that nuclear is providing some amount of power and not leading to harms for people.

SPENCER: That's true. By the way, Cory Doctorow was a guest on this podcast. If people want to check out what he says, you can listen to episode 135.

GARRISON: Oh, very cool.

SPENCER: Given that a lot of the latest AIs are being built in a for-profit setting, how do you view profit seeking as affecting the risk that we're going to face?

GARRISON: I think there are some theoretical arguments you can make, and then you can look at what's happening in the industry. A rough model I've come up with for profit-seeking in AI is: more capable and more agentic models are more profitable, all else equal, and safety efforts tend to come with an alignment tax — the effort you spend making a model safer could be going towards capabilities, or getting the model out the door faster, or whatever it might be. A consequence of this is that profit-seeking firms will make models more capable and agentic at the expense of safety. There are some reasons this might not be true. The first is that inference costs — the cost of actually having the model do something, which is higher for more capable models — may cut into profitability. So it may actually be better to make existing models cheaper to serve rather than better at doing tasks. Robin Hanson has argued that the profit-maximizing amount of agency is not full agency: you don't want the models going off and making their own choices about what to prioritize. And some safety efforts may also make models more useful and marketable. A model with no safety features would probably be bad because of the harm it would cause and the PR risks; if you have the model saying racist things or telling people how to make weapons, that's bad for your company. So I think that's interesting. In practice, Sam Altman of OpenAI has said, "We're going to put a lot of effort towards making models more agentic." There's a lot of business value in that. He said that very shortly before he was fired in November and, for a lot of people worried about AI risks, agency is a really big part of this — on their view, a model has to be smart and agentic to pose a risk. So efforts to make these models able to act more on their own, carry out longer-term plans, and affect the world more directly could increase risks.
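[A minimal sketch of the rough model Garrison describes, in Python. The functional forms and numbers below are illustrative assumptions, not anything from the conversation: revenue is assumed to grow mostly with capability/agency work, so a firm splitting a fixed engineering budget allocates almost nothing to safety unless safety also pays.]

```python
# Toy illustration of the "alignment tax" argument: a firm splits a fixed
# engineering budget between capability/agency work and safety work.
# All payoffs and numbers are invented purely for illustration.

def profit(capability_effort: float, safety_effort: float) -> float:
    # Assumed: revenue grows strongly with capability/agency, and only
    # weakly with safety (fewer PR incidents, a more marketable product).
    revenue = 10.0 * capability_effort + 1.0 * safety_effort
    fixed_costs = 5.0
    return revenue - fixed_costs

def best_split(total_effort: float = 1.0, steps: int = 100) -> tuple[float, float]:
    # Grid-search the split of the effort budget that maximizes profit.
    candidates = (
        (total_effort * i / steps, total_effort * (steps - i) / steps)
        for i in range(steps + 1)
    )
    return max(candidates, key=lambda split: profit(*split))

cap, safe = best_split()
print(f"profit-maximizing split: capability={cap:.2f}, safety={safe:.2f}")
# Under these assumed payoffs, essentially all effort goes to capability,
# which is the concern: safety is under-provided unless it also boosts revenue.
```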

SPENCER: Before we wrap up, I wanted to ask you about this phenomenon we've seen where you have these seemingly mission-driven founders creating an organization, saying they're doing it because they want to help the world, and then it ends up getting subsumed into some massive for-profit entity. We've seen this with OpenAI, which started as a nonprofit and then became a for-profit, and now we've had a shakeup with the nonprofit board that's supposed to control the for-profit, raising questions about whether it's really controlled by a nonprofit. We see this with other AI companies that, I think, feel pressure to get a big corporate partner because they need it for the compute or they need it for the money. So even if they're technically mission-driven, they're now merged, to some extent, with some massive for-profit entity. How do you see this as affecting how we should think about AI safety and what kind of strategies might be best?

GARRISON: It is pretty crazy. You just have, over and over again, people who are idealistic. They really believe AGI is possible, has risks, but could be really, really good. They start mission-driven organizations, and then they become subsumed by trillion-dollar companies and compromise their founding commitments, as you mentioned. I think there's a very strong selection effect happening, where you have the most ambitious and optimistic people starting these AI labs. And even if they're convinced the risks are real — I think Demis at DeepMind, Sam at OpenAI, and Dario at Anthropic are all convinced that AI really is an existential risk — they are still systematically going to be more optimistic about building it, and more ambitious, by virtue of the position they're in. I think there's also this idea that, "Well, somebody's going to do it, and I trust myself and my team, the people around me, more than I trust our competitors to do this in a safe way." All these people can be well-intentioned and acting this way, but then, in aggregate, they're just ratcheting up competition and increasing the race dynamics that make it more likely that people will cut corners on safety. ChatGPT's release led to a code red within Google; they recalibrated their risk appetite and rushed to release Bard, their competitor, over internal opposition. That's a direct example of the actions of one organization, OpenAI — which is more idealistic than Google — directly changing Google's risk appetite and risk tolerance. There's a quote I have in the Jacobin piece from Dan Hendrycks, who started the Center for AI Safety, where he says, "Cutting corners on safety is largely what AI development is driven by." I actually don't think, in the presence of these intense competitive pressures, that intentions particularly matter. There's a lot of focus on what Sam Altman believes, what Demis believes. I don't think that's irrelevant, but if you're focusing just on that, you might miss the overall effect of the competitive dynamics on the risk landscape.

SPENCER: That touches on what we were talking about before, where you sometimes have a founder-led organization, and you really have to think about what that founder believes in order to understand the organization. But much more commonly, when you get to really large organizations, a more accurate model is not "what does the founder believe," but "how do they make money, or how do they believe they're going to make money" — and then they're just going to do that. So as you transition from one to the other, you start getting a very different set of behaviors, and it matters less and less who's in charge.

GARRISON: I think that's right. In the case of Demis Hassabis, he's the CEO of DeepMind at Google, but he's the head of a division within a for-profit company that's worth over a trillion dollars. He just has fewer degrees of freedom than Sam Altman or Dario Amodei — at OpenAI and Anthropic — so his intentions don't matter quite as much. And people can say the right things. Nathan Labenz, on the 80,000 Hours podcast, talked about it being really good that all these lab leaders are talking about safety and are concerned about existential risk. I think it's better than them not having these concerns, given that there's some chance these risks are real. But those rhetorical commitments, or being convinced of it on a personal level, are one input into a very large, complicated system that has a lot of things pushing in the direction of releasing stuff faster, making things more capable, and doing a suboptimal amount of risk mitigation and safety.

SPENCER: So if the people who are worried about AI not just being biased and hallucinating, but actually posing a threat to the whole world, are right, and it really does pose a threat, how are for-profit entities going to view that kind of threat? Are they even the sort of entity that could possibly take that threat into account properly?

GARRISON: There's an idea I've been playing around with, which is that to a corporation — a public, shareholder-value-maximizing corporation — bankruptcy and extinction look similar. A profit-maximizing corporation's downside is bounded at zero (i.e., bankruptcy), but its upside is unbounded; there's no theoretical limit on the profits it can make. From the perspective of Google, the corporation, bankruptcy in 20 years looks roughly the same as human extinction. Now obviously, Google's CEO and its board and its shareholders will see a distinction between the extinction of the species and the end of Google alone, but because we've systematically decided to organize most of our economy around shareholder-value maximization, it's really difficult to properly price extinction risk into a profit maximizer. Existential risk mitigations are also what's known as a global public good: the benefits are widely distributed but the costs are borne acutely, and markets systematically under-provide these things. It's a fun, provocative little idea, but I think there might be something to it, just in terms of how these companies all work towards goals that maybe no individual person would choose, but because of the way we've set them up, that's just what's going to happen.
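[One way to see the point as arithmetic — a sketch with invented probabilities and payoffs, not figures from the conversation. Because firm value is truncated at zero, a catastrophic outcome and an ordinary bankruptcy enter the expected-value calculation identically, so a strategy with a small chance of catastrophe can still dominate inside the corporate objective.]

```python
# Toy expected-value calculation showing why bankruptcy and extinction look
# the same inside a shareholder-value objective. All numbers are invented.

def expected_firm_value(p_catastrophe: float, upside: float) -> float:
    # Firm value is bounded below at zero: whether the bad outcome is
    # bankruptcy or human extinction, the objective records it as 0.
    bad_outcome_value = 0.0
    return (1 - p_catastrophe) * upside + p_catastrophe * bad_outcome_value

racing = expected_firm_value(p_catastrophe=0.10, upside=1_000.0)   # race ahead
cautious = expected_firm_value(p_catastrophe=0.01, upside=200.0)   # go slower
print(racing, cautious)  # 900.0 vs 198.0 — the riskier strategy "wins"
# A person weighing extinction would not treat that downside as merely 0,
# which is the wedge between human preferences and the corporate objective.
```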

SPENCER: Garrison, thanks so much for coming on.

GARRISON: Thanks for having me. It's been fun.

[outro]

JOSH: A listener asks: "You seem very self-aware. How much time in a day or week do you spend examining your behavior and speech? And have you always been this way?"

SPENCER: I don't make dedicated time. It's not like, you know, once a month I'll spend two hours analyzing my own behavior or something like that. But I do try to make thinking about my behavior a regular occurrence. For example, when I make a mistake, or I feel like I've let someone down, or I feel like a project didn't go well, I really will try to learn from that. I also try to get feedback from people. For example, with the team I work with, periodically I'll send out an anonymous survey and ask them to critique me. I ask a whole bunch of questions trying to get them to critique me in different ways, and I've found that incredibly valuable over the years, creating that feedback loop. Also with my essays, I feel like I learn a lot and I'm able to grow because I put my essays out in the world and get comments, suggestions, and criticisms. What I try to do after I get that feedback is update the post quickly. So in the first few days of putting something out, let's say on Facebook, I'll get all that critique coming in, I'll work on improving it based on that critique, I'll thank the people that gave me good critiques that I want to use, and then not only will I have a better version of the post, but my ideas will be upgraded as well.
