April 10, 2026
Could AI trigger an economic break as large as the Industrial Revolution, or even larger? What changes when labor stops being the main bottleneck in production? If intelligence becomes reproducible like software, what happens to the structure of an economy? How should we think about a world where capital captures what labor once did? Does faster growth necessarily mean better lives, or only more output? How should economists model an economy when software begins to substitute for minds? Are current production functions adequate for a world of autonomous systems and robotics? Why do small shifts in annual productivity matter so much once compounding takes over? How much of AI’s impact depends on cognitive automation alone versus full physical automation? When does automation reduce labor demand, and when does it make human work more valuable? If AI does part of a job better, does that destroy the profession or increase demand for it? Under what conditions do humans remain complements rather than substitutes? Could an AI boom create a recession before it creates abundance? What happens to aggregate demand if white collar workers lose income before productivity gains diffuse widely? If the economy can produce more than ordinary people can afford, who is it really producing for? How quickly can consumption patterns shift in a world of extreme concentration of wealth?
Anton is a Professor at the University of Virginia, Department of Economics and Darden School of Business as well as the Faculty Director of the Economics of Transformative AI (EconTAI) Initiative. He was named to the 2025 TIME100 AI list of the most influential people in artificial intelligence. He is a Nonresident Senior Fellow at Brookings and the Peterson Institute, a Research Associate at the NBER, a Research Fellow at the CEPR, and serves on Anthropic's Economic Advisory Council. His research analyzes how to prepare for a world of transformative AI systems. He investigates the implications of advanced AI for economic growth, labor markets, inequality, and the future of our society.
SPENCER: Anton, welcome to the Clearer Thinking Podcast.
ANTON: It's great to be on air with you.
SPENCER: Will AI be as big as the Industrial Revolution was?
ANTON: In many ways, yes, and in some ways even bigger, I would say.
SPENCER: That's a pretty shocking claim. Can you give us an idea of what the Industrial Revolution did and how it changed things?
ANTON: Yeah, I would say, from an economic perspective, the main thing about the Industrial Revolution was that it moved our economy from being based primarily on land to being based primarily on labor. Before the Industrial Revolution, the vast majority of people worked on tending the land and growing the food that we needed to eat. After the Industrial Revolution, the main part of the economy was built around people working with machines to produce goods, and that made us a lot more productive. It made us a lot wealthier.
SPENCER: Can you describe what AI might do in those terms?
ANTON: I think what AI is going to do is going to be an equally big transition. Right now, about two-thirds of all the output produced in the economy goes to pay labor, and AI promises to be a substitute for that labor. It promises to be able to do many, if not all, of the things currently done by humans. If it succeeds at that, then it may suddenly turn our economy into something that's based much more on capital than on labor, and it may shrink the labor share quite significantly.
SPENCER: Could you explain that distinction between capital and labor? How is that defined?
ANTON: Yeah, so if you look at how output is produced in the economy, you have what we economists call the two main factors of production: labor and capital. Labor is us, the workers; capital is everything else. It's the machines, the factories, cars, and so on. Right now, labor takes home two-thirds of the total proceeds that are produced, and the capital gets paid using the remaining one-third. If AI can substitute for more and more of the labor, which is the declared goal of some of the leading AI labs, then that capital share is going to grow because AI, robots, and so on are part of capital, and the labor share is going to shrink.
SPENCER: It reminds me a little bit of chemical reactions, where, if you're trying to create a chemical reaction, if you run out of one of the enzymes or factors first, that's the thing that limits it. It sounds like capital becomes the limiting factor in a world where labor is done by AI, is that right?
ANTON: I think that's a good way of looking at it.
SPENCER: I'm not an economist, but my understanding is that there are these equations that relate growth or productivity to capital and labor. Could you talk about how those get combined into productivity and what productivity means?
ANTON: We economists try to describe the whole economic system like you described a chemical system. We try to write down so-called production functions that tell you, if you take a given amount of capital and a given amount of labor and combine them, how much output can you get out from that? Traditionally, since the Industrial Revolution, labor has been the limiting factor. You can hear this oftentimes when you talk to business people; they say, "Oh, if we could only find more people, if we could find more talent, then we could expand the output. We could produce more." That's true at the economy-wide level too. If you can substitute for workers with AI systems, then we may actually be able to do that. We may be able to produce more, but not by using additional workers, just by using additional machines that get plugged into this economic production function. The big question is, how will this production function change? How will the mechanisms of our economy change if we have AI systems that are as powerful as many in the industry predict? The quick answer is they may change quite fundamentally. They may be able to produce a lot more, and they may make us a lot wealthier, but they may also reduce the role of labor in the economy.
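As an editorial aside, the production functions Anton describes are often written in textbooks in Cobb-Douglas form. The sketch below is illustrative rather than a formula from the conversation; the 2/3 labor exponent is chosen to mirror the labor share discussed later in the episode.

```python
# Illustrative Cobb-Douglas production function: output from combining
# capital and labor. The 2/3 labor exponent is an assumption matching the
# labor share Anton mentions, not a number derived in this episode.
def output(capital, labor, productivity=1.0, labor_share=2/3):
    return productivity * capital ** (1 - labor_share) * labor ** labor_share

# With K=8 and L=27: 8**(1/3) * 27**(2/3) = 2 * 9, so output is about 18.
print(output(8, 27))
```

One property worth noting: with a labor exponent of 2/3, doubling capital alone raises output by only about 26% (2 to the 1/3 power), which is one way to see why labor, not capital, has historically been the binding constraint businesses complain about.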
SPENCER: We'll definitely dive into those details, but before we do, can you talk a little bit about what the output is that economists are talking about, what they mean by productivity, or they talk about growth. What are they actually measuring?
ANTON: So they are measuring the total value of the goods and services that the economy produces. Or, technically, it's called GDP, gross domestic product, and that's a measure of how productive the economy is, how many useful things the economy is producing. Broadly speaking, although there are some important distinctions, it also gives us a measure of the economic welfare of the nation.
SPENCER: One critique sometimes levied at economists is that GDP may not track what we fundamentally care about. To what extent do you think that's a fair critique versus not on point?
ANTON: It's totally fair. It doesn't capture a lot of things that we may care about. For example, you can't measure things that are really important to us, like love. You can't measure things like unpriced resources or a beautiful environment. Sometimes we actually include things in GDP that are purely defensive expenditures. So, if there is more crime and we need to hire more security guards, that actually adds to GDP, but I don't think we would argue that makes us better off. So yeah, it doesn't track that perfectly, but at the same time, if GDP from one year to the next goes up or down, that still gives you a good indicator of whether the economy is doing better or not, because all the other factors don't move that quickly.
SPENCER: You could imagine two societies that are exactly equivalent. They produce the same amount of GDP, but in one of them, everyone's miserable because it's just a really psychologically unhealthy culture where everyone thinks that they're worthless unless they work a lot, and they can never be good enough. In the other culture, everyone works motivated by love, connection, and happiness, and they produce the same GDP, but one is a bad culture and one is a good culture. So clearly, it's not all that we care about. But all that being said, some people have argued that historically, it has been a really good measure related to human welfare. Do you think that's fair?
ANTON: I think that is fair. So, big picture, I'll go back to the beginning of the Industrial Revolution. Back then, people barely had enough to eat. The average person would frequently experience spells of starving. Nowadays, I think that is not true for the vast majority of us, fortunately, and that is, for example, something that is very much reflected in the GDP statistics, in the growing material welfare of our nation. But yeah, I think it's important to be clear and upfront that GDP is not everything.
SPENCER: So it's not everything that matters, but historically, increases in GDP have often led to increased welfare. I think that's fair to say.
ANTON: Being materially better off is probably a necessary condition for higher welfare, but it's by no means sufficient.
SPENCER: There's also the question of how that money is distributed. Because GDP per capita is not saying anything about the distribution. In theory, although it would be hard to achieve, you could have a society where one person has all the money. Everyone is a slave to this one person, and it could have a lot of GDP, even GDP per capita, but clearly it would be a terrible society. So how do you think that comes into play here, in terms of just the distribution of GDP?
ANTON: That distribution matters a lot for the average person. Because if, let's say, we make fundamental breakthroughs in AI and all the benefits just go to five people, then the average person will not benefit from that. In some sense, it would make absolutely no difference for them whether we have this amazing technology or not. But the promise of the technology is that if it can produce so much more, if it can make the economy so much more productive, then it creates the potential that everybody can benefit to some extent. Maybe some are going to benefit a bit more and others are going to benefit a bit less. But ultimately, I think it would be a huge failure if we develop really amazing technology, if output goes up by orders of magnitude, and there are people who are actually made worse off by that. I think that would be a very sad state of affairs. It's like imagining the Industrial Revolution, which made us 20 times richer on average, had left some people clearly worse off than before.
SPENCER: So when we think about the effects of AI economically, it seems like we can break it into sort of three questions. One, will AI increase GDP per capita? We can go into that, but my guess is that almost every economist, or almost everyone, would agree it's almost certain to do that. Two, will it make the distribution of economic benefits worse? Will it make things more unequal as it raises GDP per capita? And then the third is, will other values get destroyed in the process? Because, as we've talked about, GDP is not the only thing we care about. What if it led to an authoritarian state? Even if it increased GDP, that could be terrible. So on that first question, GDP per capita, is it essentially a consensus view that AI will increase GDP per capita, or are there some people who say, no, maybe not?
ANTON: There's a lot of debate. It's actually hugely contentious. For example, the Nobel laureate Daron Acemoglu predicts that over the next 10 years, AI will add just 0.07% to GDP every year. So essentially nothing. It's a rounding error.
SPENCER: Is there anyone who thinks it's going to be negative, or is the question of whether it's going to be zero or positive?
ANTON: Good question. I have not seen anybody who says that it's actually going to be a negative contribution. It's a hard case to make.
SPENCER: I guess if he's right, then it'll be a negative.
ANTON: I guess that's right, and it would be minus 100%, though not necessarily. It could actually be that GDP still continues to grow, just, yeah, without humans.
SPENCER: Well, I guess if there are no people, I don't know what per capita means.
ANTON: That would be infinity.
SPENCER: Yeah, infinity. There you go. Okay, so there is a debate that actually surprised me a bit, that some people think it will be so low. What's the case, before we get into the case that it might be high, that it will increase GDP per capita a lot? What's the case that it won't increase it, or barely will increase it?
ANTON: Yeah. So basically, the way that you estimate that impact is you look at what fraction of tasks in the economy is going to be affected by automation, by AI, then you look at how much of a productivity gain in each of these specific areas of applications you're going to expect.
SPENCER: And could you just explain that? What does that mean?
ANTON: That means basically, how much more output for a given dollar of input can you get? So, let's say, right now you pay a consultant $100,000 and you get a study that tells you how to reorganize your business. And let's say the AI can do that for $5,000; that would be a 20x productivity gain, because you can do the same thing for 95% lower cost. So if you multiply those two numbers, then you get the expected productivity gain for the economy. If you lowball both numbers and multiply two small numbers, you get a very small number as a result.
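The arithmetic Anton sketches can be written out explicitly. The consulting figures are his; the task-share and within-task numbers in the second half are placeholders I chose purely for illustration, not estimates from the episode.

```python
# Anton's consulting example: the same study at 95% lower cost.
old_cost = 100_000   # human consultant fee
new_cost = 5_000     # assumed AI cost for the same study
gain = old_cost / new_cost
print(gain)  # 20.0 -> a 20x productivity gain on that task

# Aggregate effect = (fraction of tasks affected) x (average gain there).
# Both figures below are illustrative placeholders; lowball both and the
# product is tiny, which is how near-zero GDP estimates arise.
task_share = 0.05
avg_gain = 0.10
print(task_share * avg_gain)  # ~0.005 -> about 0.5% added to GDP
```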
SPENCER: I struggle to see how you could get such small numbers. I mean, I'm already with Claude Code, there have been just side projects I haven't been meaning to do for years, but I'm never gonna get around to them, because they would take me a month, and then I do it with Claude Code in a few days. Now, of course, that's not necessarily true of everything. Maybe those projects are especially useful for using AI or AI models are especially good for those kinds of projects, but it's still surprising to me.
ANTON: Yeah, so I think the first thing that especially people in white-collar professions need to appreciate is that it's actually not that big of a part of the economy that is engaged purely in cognitive work. Depending on how you measure it, it's like 10 to 20% of the economy. Everything else requires at least some amount of physical interaction, and that means it's hard to automate using solely AI without bringing in robotics and so on.
SPENCER: What are the biggest job categories, things like service workers, people at Walmart, restaurants, and caretaking for others? Are those right?
ANTON: Yeah, nowadays, the majority of the economy is service sector jobs. Within services, you have health care, you have education, and arguably, all of those are going to experience some benefits from AI, but they will also, at least in the short term, still require the human touch and humans in the loop for a lot of things, and that kind of puts a ceiling on the productivity gains. If you want to, you can kind of lowball all those productivity effects. You can say it applies only to a small fraction of the economy, and then if you multiply several small numbers, you get an even smaller number as a result.
SPENCER: And what kind of mistake do you think they're making there? Do you think they're just using overly conservative estimates for the impact on productivity, or are they underestimating what share or percentage of the economy will actually be affected?
ANTON: Yeah. So in fairness, there was an estimate that was put out almost two years ago, and back then, systems were significantly less powerful. We all know that. And so to give them the benefit of the doubt, if you over-index on what AI can do at one given moment, and you don't keep in mind the full trajectory of how rapidly it's improving, then it's fair to say two years ago that it's not that impactful for the economy, because two years ago, it couldn't do that much. But I think the most important thing is the trajectory and how rapidly it's improving and how much better it's getting.
SPENCER: Yeah, I was looking up something around AI capabilities. I was looking at papers, and I kept finding papers from 2022 and 2023. I was like, "This is completely irrelevant. I actually learned essentially nothing. I need a paper that's at least from 2025, ideally from two months ago, to actually refer to anything here."
ANTON: Yeah. So I think that's the biggest factor. And then I guess you can also make the case that many of the projects, for example, that you are doing in Claude Code now, you would never have performed them, so maybe they were actually more marginal and not that useful for the economy.
SPENCER: Fair, yeah. When you do the analysis, what kind of numbers do you look at in terms of effects on GDP?
ANTON: I haven't done the analysis carefully in quite a while, but I'll tell you. Three years ago, shortly after GPT-4 came out, I wrote a piece in which we predicted that it would lead to 1.5% higher GDP growth per year for the next decade. We distinguished between two channels: first, how much it can automate in the economy, how many jobs it can take over, and how that's going to make the economy more productive. Second, it will lead to greater productivity growth because people can use these systems to research new things and engage in innovation, which is also a really important channel. Now that was three years ago. We are in a completely different situation. We have no idea how long exactly it's going to take, but we are probably pretty close to the point where these systems can recursively self-improve. Once they can do that, there are economic models (I'm currently working on one of them, though I don't have my final numbers yet) in which this gives rise to a growth explosion. In some ways, the regularities we have seen since the beginning of the Industrial Revolution, where you had 2-3% productivity gains per year, may be completely tossed out the window, and we may see something that's potentially double-digit. The counter case is that there are also bottlenecks in the economy, like what you mentioned before with the analogy of the chemical reaction. If you need physical actuators but have only automated the cognitive side, that's going to hold you back.
SPENCER: Give the audience some context on these numbers. Your initial estimate was something like 1.5%. That sounds really tiny, but give us context. My sense is that's actually not that small, even those estimates. What kind of GDP growth do we expect in a country like the US, and how big a deal would even 1% be?
ANTON: Over the past almost 200 years, we have had something like 2-3% productivity growth every year. That sounds tiny. I mean, 2% is really not much. But the thing about it is that it compounds; it's exponential growth. So if you have 1.5% extra growth for a decade, it gives you something like a 16% higher level of GDP. It's like 1/6 bigger, and that becomes noticeable. So, yeah, the thing about exponential growth is small numbers compound over time.
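Anton's compounding arithmetic checks out; as an editorial sketch, the one-liner below reproduces it.

```python
# Compounding: 1.5 extra percentage points of annual growth, sustained
# for a decade, raises the level of GDP by roughly 16%.
extra_growth = 0.015
years = 10
level_gap = (1 + extra_growth) ** years - 1
print(round(level_gap, 3))  # 0.161 -> roughly a 16% higher GDP level
```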
SPENCER: I think the really shocking way to look at this is, if you have one of those graphs that's like GDP per capita over the entire history of humanity, and it looks like this really, really flat thing, and then you get the industrial revolution. It just shoots to the moon. It's almost crazy how much it shoots up. And then we're talking about taking that thing that's already shooting to the moon to the moon even faster. But put that in context, yeah.
ANTON: One of the things was, before the Industrial Revolution, we lived essentially in a Malthusian regime. It means whenever the economy could produce more, population growth caught up, and the average person was again back down to subsistence levels. It was really brutal. All the growth went into more people instead of more welfare.
SPENCER: It's like, if you have a forest and there are a few rabbits, eventually they expand until there are so many rabbits that now the rabbits aren't getting enough food to eat in the forest, and then the population comes down, eventually kind of reaches equilibrium based on the resources. But the rabbits are, for only a little while, well off in the forest, until they've kind of used up all the resources.
ANTON: Uh huh, yeah. And before the Industrial Revolution, we were like rabbits. We expanded until we filled up the available resources.
SPENCER: So how do you get to these numbers? I know you're not done with your calculations, but you said numbers like 20%, which would be enormous. Where do these figures come from? You mentioned recursive self-improvement.
ANTON: And maybe I should also point out, in order to reach that we need more than just the cognitive side of AI. We need to automate much more in physical production as well. But yeah, let's walk through this and look at what happens if we have recursive self-improvement, or maybe even a little bit before that. In this latest paper of mine, we essentially look at how multiple forces in the economy can create feedback loops that feed into each other and thereby trigger much higher economic growth. So what are those feedback loops? The first thing is, right now, we live in an economy where we said labor is kind of the bottleneck, but soon, with artificial general intelligence and powerful robotics, those machines are going to be able to perform essentially everything that a worker can perform. That means the more of those machines you have, the bigger your economy can get. So that's kind of the first feedback loop, and as you accumulate more and more of those, the economy can grow. Then the second factor is, if you have really powerful machines and really smart machines, they can not only produce more, they can also engage in more research and development, and that can advance our level of technology. That means it allows us, for given resources, to squeeze more out of the economy, to combine capital and labor, or only capital and AI machines more efficiently to produce more.
SPENCER: And here, machines refers to not necessarily physical machines. It could be just AI running on a server as well, right?
ANTON: That's right. In practice, it's going to have to be both. It's going to have to be the intelligent component and the physical actuators, meaning robotics.
SPENCER: So having machines, you can essentially make more machines faster, and so it creates this kind of feedback loop system.
ANTON: Yeah, and now, what makes those machines better? There are two more feedback loops. The first one is software development. That's where we are the closest to recursive self-improvement right now. If you have better software, that software can contribute to advancing AI systems going forward, making them more efficient and basically squeezing more intelligence out of a given amount of compute. The second part is you can also use that software to design more intelligent hardware and to design more advanced chips. Those better chips feed back into allowing you to run more of the software, and they mutually reinforce each other into a virtuous circle.
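A toy simulation can make these mutually reinforcing loops concrete. This is my illustrative sketch with made-up parameters, not the model from Anton's paper: output is partly reinvested in machines (machines making machines), and partly funds R&D that raises the technology level. Because both loops feed on output, the annual growth rate itself rises over time instead of staying constant.

```python
# Toy sketch of two feedback loops (illustrative parameters, not Anton's
# actual model). Output buys more capital, and output-funded R&D raises
# technology; each loop amplifies the other, so growth accelerates.
def growth_path(years=15, reinvest=0.2, rd_effect=0.02):
    capital, tech = 1.0, 1.0          # assumed starting levels
    outputs = []
    for _ in range(years):
        output = tech * capital       # production: machines x technology
        outputs.append(output)
        capital += reinvest * output  # loop 1: machines making machines
        tech *= 1 + rd_effect * output  # loop 2: R&D scales with output
    return outputs

path = growth_path()
rates = [b / a - 1 for a, b in zip(path, path[1:])]
print(f"growth in year 1: {rates[0]:.1%}; in year {len(rates)}: {rates[-1]:.1%}")
```

With the R&D loop switched off (`rd_effect=0`), the same code settles into ordinary steady exponential growth, the analogue of the stable 2-3% regime; it is the coupling of the loops that produces acceleration.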
SPENCER: So imagine AIs doing research on how to make AIs more efficient, achieving the same amount of intelligence with less compute. AI is researching how to make chips more efficient or use less energy, etc., and then that's all kind of feeding back.
ANTON: Hopefully, in the end, the AI will also be doing research on how to heal our diseases, how to allow us to live longer and more happily, and so on.
SPENCER: When you put in these feedback loops, I imagine they're difficult to model. I imagine it's very sensitive to assumptions. Do you feel like it's something we can actually get a handle on, or are we really just in a situation where we have huge error bars on any kind of estimates?
ANTON: Yeah. There are error bars, but at the same time, I have come to the conclusion that some sort of takeoff in the next couple of years is quite likely.
SPENCER: And takeoff here means you're getting this recursion that's rapidly improving productivity. Is that right?
ANTON: Yeah, that's right. So at first, if we have only the software side automated, it's going to be a little bit slower still, but if we have the software and hardware side automated, then I think economic growth is going to proceed significantly faster than today. But then a lot still depends on how fast we actually want that process to roll out and how fast it is.
SPENCER: Now, one thing that comes to mind for people when we talk about AI getting better and better is that people might be replaced with AI. If people are replaced with AI, you might think that, "Well, that means people are going to have less money, and they're going to have to save more of the money that they do have." So does that mean people are spending less? Isn't that sort of working against the whole phenomenon? Isn't our economy essentially about transactions occurring, and if people have less money, they do fewer transactions?
ANTON: I think that is a very reasonable concern in the short run. In some sense, you can say that in the long run, the economy produces what those who earn the income spend. Right now, our economy is oriented very much at producing the goods and services to meet the needs of people like you and me. If the economy of the future were to distribute the income much more unequally, then a lot of that production is going to have to change. It's going to have to shift from what you and I want to consume to what the multi-trillionaires want to consume. That won't happen overnight, and that's where that aggregate demand shortfall that you described before may come in. There may not be people consuming the goods and services that we have available right now, and that is indeed a significant concern. What might happen is, let's say that if we have a wave of automation reflected in jobless numbers, people start spending significantly less, we can't quite see a lot of the productive gains that will come yet, and during that transition, we may actually experience a slump. We may actually experience a recession. Now I should add, nobody knows quite how the timing is going to work out, so I think it's something we should look out for, something that is a real concern. But it may also be that we are going to see the productivity gains first, and that those aggregate demand shortages may not turn out as badly as what we just walked through or what the Citrini Research report described.
SPENCER: What did they describe?
ANTON: They essentially described a situation where lots of people, lots of white-collar workers, lost their jobs, and that led to aggregate demand reductions, which kind of led the economy into recession.
SPENCER: Before we get more into the distributional effects and job loss, I'm curious what other economists say about your models. What kind of critiques do you get? What do people say when they say, "Hey, you're totally crazy. There's no way it's going to create 20% productivity." What are their arguments?
ANTON: Yeah, I've certainly gotten that line quite a bit, that I'm totally crazy. I relate to it because right now and tomorrow, I don't think we are going to see 10% growth. Frankly, we haven't seen those kinds of numbers here in the US, essentially ever, except very shortly, bouncing back from a recession or something like that. It's something out of the ordinary. It's something that's very different from the historical norm. In fields like economics, the historical norm is a pretty good guide to the future.
SPENCER: But you always have to be humble about predicting things that are out of the reference class of what we've seen before.
ANTON: I think so, yeah, I think it's reasonable. I'm relatively certain that this kind of growth will take place. I can't tell you if it's going to be in three years or 10 years. There's a lot of uncertainty about it. But, how should I say, I relate to my traditional economics colleagues who say, "Oh, this is crazy."
SPENCER: It's funny because, for an economist, this is a wild projection. It's an incredible projection. But then you compare it to people like Ray Kurzweil who are saying, "We're gonna merge with the machines." It doesn't sound that ambitious or the AIs are gonna kill us all. It's funny how it's sort of conservative relative to those predictions.
ANTON: Yeah, I spoke to a very famous economist a couple of weeks ago, and he said, in economics, history is a better guide to the future than science fiction. The only thing I would point out is that we are not talking about science fiction. We are talking about extrapolating relationships, a general version of scaling laws, and just looking at how much better the technology has gotten in the past few years, and that there doesn't seem to be a clear ceiling in place. There are very solid and grounded, scientifically grounded, reasons to believe that the future may change quite dramatically, although we have to be humble and acknowledge that they are extrapolations.
SPENCER: What do you think is the strongest argument that they give against your model?
ANTON: The strongest argument against a growth explosion is probably that there may be strongly decreasing returns to scale in having better systems. What that means in non-technical language is we have this notion that if the AI gets smarter, it can do more, but maybe it can't do all that much more. Even if you are far smarter than anybody in the universe, maybe that allows you to solve certain problems a little better, but it still won't allow you to perform magic. Those extra returns to intelligence at some level may not be that high, and the extra costs for it are going to be high. If you look at where the costs and benefits curve intersect, perhaps it won't buy you quite as much as people are expecting. I think that would be a pretty strong argument.
SPENCER: Yes, interesting argument. On the one hand, my intuition is that there aren't those kinds of limits. Having an Einstein can push you forward in physics significantly.
ANTON: I know, right? That's my intuition, too.
SPENCER: And then there are these breakthroughs that change the game, and that intelligence really can continue to help you make bigger breakthroughs. On the other hand, maybe you can make an evolutionary case that evolution certainly seems to see diminishing returns to intelligence. It's not like evolution is just making all creatures smarter; it's clearly not doing that. Humans got smarter, but many creatures have stuck around for millions of years at roughly the same level of intelligence. Not to say that we can easily put intelligence to a single number when it comes to animals, but clearly, it wasn't just maximizing intelligence that leads to superior survivability.
ANTON: Yeah, that's a really interesting case. You would essentially say, well, evolution hasn't made us smarter than we are because there's no point in being much smarter.
SPENCER: Yeah, I don't know. Obviously, it's optimizing for something different, like survival, exactly. But if being super intelligent made you kind of this magic wizard to control reality, you think that would be pretty good for the survival of your genes. It's interesting to think about that.
ANTON: Yeah, I agree.
SPENCER: Let's talk about distributional effects. When you model it, do you also have a model for how that wealth is distributed in society, or is that hard to tell?
ANTON: I have done some research on that question. That's in different models, though, and I haven't put the two together yet, I should say. But what are going to be the effects on income distribution? I think what is almost certain to happen is that the labor share of the economy is going to decline. That means the total fraction of economic output that goes to paying workers is going to decline. Now, the big question for most of us, for all of us who are salaried or on wages, is whether the actual level of wages is going to go up or down. Let's say, in 20 years from now, labor earns a smaller share of the economy, but output has grown so much that we are still five times wealthier than we are today. That would be a pretty good outcome; being five times wealthier in 20 years has rarely happened in history.
SPENCER: People might hate trillionaires, but five times more money, they'll probably get over it.
ANTON: Right, yeah, I think I would be pretty happy with that outcome. But that's not guaranteed. It could also be that if these machines become very good substitutes for labor, demand for labor could shrink so much that the actual level of wages could decline. I would say in some corners of the economy, that's actually quite likely. But the big question is: is it going to happen to the overall wage level? Let's take a job where we can see, using the current technology, how a lot of people might lose their current jobs, which would be, for example, call center agents. There is a lot of potential for automation there. The systems performing this function don't need to do it perfectly; if the system fails, it can escalate you to a human, and through that mechanism it could essentially reduce demand for call center agents by 90%. Workers in that sector will almost certainly be hurt by that, but if they can switch to something else that is significantly better paying, then in the medium term, it wouldn't be a bad outcome either. We have oftentimes seen individual jobs get automated, and if that's concentrated in one geographic area, then that's hard to swallow. But if it's, for example, jobs in a city that is pretty dynamic, and people find better paying jobs, then it's not that bad of an outcome. The big question is, if it happens on a bigger scale, if it happens to jobs across the economy, then it could be really hard to swallow for our society, and it could lead to a lot of discontent, a lot of polarization. Yeah, it could lead to a world that I'd be pretty concerned about.
SPENCER: I'm concerned about that too.
ANTON: We don't know how the productivity gains and job losses are going to be sequenced. It might actually work out. It might be a bit more benign than the pessimists are concerned about. But the problem is, nobody knows. As economists, we usually say, "You should hedge your bets. You should be prepared for the bad cases, the bad outcomes, and hope for the good scenarios." But you should not be blinded by optimism.
SPENCER: Let's walk through a specific example, because I think it helps illustrate the kind of employment effects. Imagine there's a certain type of doctor, and there's a particular task they do that can now be done with AI, and let's suppose that for that task, AI is faster and more accurate than the doctor. And let's suppose that was 10% of what they were doing. So before, they were doing a whole bunch of stuff. Now 10% of it is automated. The AI does it faster and better. To me, that doesn't immediately seem like it implies you get fewer doctors. Am I correct in that?
ANTON: So if it's only 10%, then probably it won't change that much. In some sense, let's apply your thought experiment to the entire economy. Let's say we suddenly have AIs and other machines that can do 10% of everybody's jobs. That would actually be really neat as well, because it would mean that we have 10% higher productivity; you can get 10% more work out of any given hour, and that would immediately make us wealthier. If it's so evenly distributed, it means there are essentially no losers from this. It would be an amazing development.
SPENCER: And let's now zoom in on that 10% and suppose that 10% was something really important to the value of the product. So let's suppose it was the accuracy of their diagnosis. Let's suppose that without that AI — and I'm just making this up, obviously — let's say the doctor was 80% accurate at the diagnosis, this type of doctor, and let's say with the AI, they're 99% accurate. Well now it seems like this is a huge win because it might actually make going to the doctor more valuable. You can even imagine a world where rational actors would purchase more time from doctors.
ANTON: Yeah, totally.
SPENCER: So would we expect, in such a case, at least in theory, that employment might actually increase for doctors in that world?
ANTON: So let's say it saves only 10% of their time and makes their services a lot better. Yeah, that would be plausible. So it's not a guarantee. A lot of medical services are pretty inelastic, meaning you're not going to consume a lot more just because it is slightly better. Let's say, for example, your annual physical. You're not going to go there every six months because it has gotten much better. But maybe for some things, you might.
SPENCER: Maybe doctors are a weird example, but let's say it was they could reduce your pain more effectively or something, and maybe you would be more likely to go to the doctor for chronic back pain or something. So I guess the reason I bring up this example is it shows that AI automation doesn't immediately mean job losses. It doesn't automatically mean it, and at least in some cases, it could even cause an increase in purchases of a service if that's very much better.
ANTON: Yeah. And if we go back again to the Industrial Revolution, in some sense you can see the past 200 years have been a progression of automating one thing after the other, first spinning and weaving, transportation, tractors, and that's the basis of why we are so much wealthier today as a society.
SPENCER: So in addition to the effects on that profession, there are often effects on society, like costs getting lower or services improving. If a certain job is automated, now you can buy it cheaper, or maybe it's done better in the automated form, more consistent.
ANTON: Exactly. Yeah, there is a direct correspondence between what we call productivity gains in economics and cost savings. If you save 50% of the cost of something, another way of putting it is you are 100% more productive because you can produce twice as much with a given amount of inputs.
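The correspondence Anton describes can be written as a one-line identity: if a fraction s of cost is saved, the same inputs now buy 1/(1-s) times the output, a productivity gain of 1/(1-s) - 1. A minimal sketch checking his 50% example (the function name is just an illustration, not anything from the conversation):

```python
def productivity_gain(cost_saving: float) -> float:
    """Productivity gain when a fraction of cost is saved.

    If a fraction s of cost is saved, the same spending buys
    1 / (1 - s) times as much output, a gain of 1 / (1 - s) - 1.
    """
    return 1.0 / (1.0 - cost_saving) - 1.0

# Saving 50% of cost doubles output per dollar: a 100% productivity gain.
print(productivity_gain(0.50))  # → 1.0

# Smaller savings map less than one-for-one: a 10% cost saving
# is roughly an 11% productivity gain, not exactly 10%.
print(productivity_gain(0.10))
```

Note the asymmetry for small numbers: the two measures are nearly equal at a 10% saving but diverge sharply as the saving approaches 100%, which is why halving costs counts as doubling productivity.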
SPENCER: As I understand it, when you talk about income, what you care about is not literal dollars, but you care about how much you can buy with those dollars.
ANTON: Exactly.
SPENCER: Things getting cheaper is sort of equivalent to making more money. It doesn't matter at the end of the day.
ANTON: Things getting cheaper is equivalent to having more purchasing power and having more goods and services that can underpin our material well-being.
SPENCER: Now let's talk about a different case of AI automation. Let's use doctors again, and I'll just use radiologists as an example. Suppose, and this is not true today, but suppose that AIs could do everything a radiologist could do as well or better. What actually happens? So we wake up one day, and presumably the cost-conscious hospitals are all like, "Oh, okay, we're going to start replacing our radiologists," and let's assume it's legal. The laws allow it.
ANTON: That's a big part of it, yeah.
SPENCER: Because there could be regulations that prevent them from doing it, but suppose they're allowed to do it. What do we actually see happen?
ANTON: Yeah, so there's the regulations, and then there's also the liability question. Part of what it means to be a professional is to be responsible for things. But let's keep those out of the conversation for the moment. And let's say we have this job. Let's say we have a kind of robot radiologist with a humanoid face who can literally perform all the tasks and can do it perfectly. That would be pretty close to having a perfect substitute for the human worker. The example is so interesting because it shows you all kinds of reasons why human workers may still stay in the loop, even if you have these amazing systems. For example, lots of patients are still going to want to communicate with a human rather than with the AI system. So we have to make lots of assumptions, actually, for it to be possible to roll out this machine instead of the radiologists. But let's say, economically, if all those assumptions are satisfied and if we roll out the AI radiologist instead of the human radiologist, then yes, all those human radiologists could lose their jobs. And you know the almost 20 years of education that they have put into becoming what they are suddenly becomes worthless.
SPENCER: Yeah, so let's keep going on this thought experiment. So suddenly — let's imagine — hospitals don't adopt this instantly. Maybe they will start rolling out these AI radiologists over a few years. There are fewer and fewer jobs, and suddenly all these radiologists are without work. Presumably, many of them are like, "Okay, what can I do that's adjacent? What's the closest thing to my training? Maybe I need to get some additional training to switch." So what would be the impact of that?
ANTON: Yeah, maybe they can switch into general practitioners for some time, which is a job that requires education. You still have to go through medical school, but you don't have to do the radiology specialization. Then you're going to have this flood of specialists entering general medicine, and that's going to exert pressure on incomes there. It's going to exert pressure on wages, and ultimately, it's going to hurt all the other doctors who practice general medicine because you suddenly have a lot more supply for a given amount of demand.
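The supply-pressure mechanism Anton describes can be sketched with a linear inverse labor demand curve. Every number here is purely illustrative, not an estimate from the conversation:

```python
def market_wage(supply: float, intercept: float = 100.0, slope: float = 0.5) -> float:
    """Linear inverse labor demand: wage falls as supply rises.

    The intercept and slope are arbitrary illustrative values,
    chosen only to show the direction of the effect.
    """
    return intercept - slope * supply

# A 20% inflow of displaced specialists into general practice
# pushes the market-clearing wage down for everyone in the field.
print(market_wage(100))  # baseline → 50.0
print(market_wage(120))  # after the inflow → 40.0
```

The point is not the magnitudes but the mechanism: incumbents who were never automated still lose, because the displaced workers land in their market.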
SPENCER: And in addition to that, I imagine you could see prices going down for general practitioners.
ANTON: Yeah, exactly. Which is going to also feed further into the wage declines in the sector.
SPENCER: Yeah. So it seems like the effects get really complicated. If you think about an individual radiologist, if they're near retirement age, maybe they just retire early. Now they have less savings. Maybe someone who always wanted to be a musician quits and starts a music career. But probably most of them try to go to the most adjacent, well-paying job, flooding the market. Suddenly that market is now depressed, the prices drop, so you start having rippling effects throughout society. Not all bad because now maybe people can get better GPs at a cheaper price, and maybe that actually benefits consumers. So, there are some losers and winners.
ANTON: Yeah, certainly, the education that they have gotten would have to be written off. In that case, it's a legacy asset. The example is neat because the concern is that many of us white-collar workers may become the proverbial radiologists in this example. But of course, we should also keep in mind Geoff Hinton, one of the godfathers of AI, already said, I think it was 11 years ago, that we should stop training radiologists, and for now, we still very much need them.
SPENCER: Yeah, that's why it's also a funny example, because people have been talking about them being automated forever, but it hasn't happened. Now I have a quote from you I want to read, and I'm curious to have you explain this to us: "If there are substantive gaps and things only humans can do, then humans will be very complementary, and wages will rise significantly." Could you explain that? How could wages rise?
ANTON: Yeah, so that's kind of the optimistic scenario, the one that I'm crossing my fingers for. If there are things that only humans can do, then humans remain the bottleneck in the process of production. You have these amazing machines that are incredibly productive. You have lots and lots of them. But let's say, for example, every time a machine produces something, you need the human to give final approval. That final approval becomes incredibly valuable because if you don't grant it, all the output produced by the machine is worthless. This is kind of an extreme example. But whenever you have something where humans are complementary to the output produced by the machines, they become more valuable. In some sense, you can say that's the story of the Industrial Revolution. We have automated other things, but we have become complementary to the machines. To make it tangible, at first, we automated spinning and weaving, but you needed human operators for those machines, and those human operators could actually oversee the production of quite a bit of output. We've done this with lots of industrial processes; the humans are now overseeing their machines and are a lot more productive. The same thing happened in agriculture; the humans are driving the tractors, and the tractors are much more productive than what we used to do before. Then it happened with software and white-collar services. We are overseeing our machines, and we are much more productive in many services now than we were a hundred years ago. If that continues to be true, then we are going to continue to see wage gains, and we're going to continue to see broad-based welfare increases through the labor market.
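Anton's "final approval" example is, in effect, a fixed-proportions (Leontief) production function: output is capped by whichever input runs out first. A toy sketch, with all numbers purely illustrative:

```python
def approved_output(machines: float, humans: float, machine_prod: float,
                    approvals_per_human: float = 100.0) -> float:
    """Leontief bottleneck: every unit of machine output needs human sign-off.

    Output is the minimum of what the machines can produce and what
    the humans can approve. All parameters are illustrative assumptions.
    """
    return min(machines * machine_prod, humans * approvals_per_human)

# With hugely productive machines, the humans bind:
base = approved_output(machines=10, humans=1, machine_prod=1_000)  # 100.0
print(approved_output(11, 1, 1_000) - base)  # an extra machine adds → 0.0
print(approved_output(10, 2, 1_000) - base)  # an extra human adds → 100.0
```

Once machines are abundant, an extra machine is worthless and an extra human unlocks a whole machine's worth of output. That is the mechanism by which human complementarity translates into rising wages.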
SPENCER: But to be clear, when did those transitions happen? When we went to these different forms of automation, there often were losers who lost pretty badly. Is that right?
ANTON: Absolutely, in the short term, you had transitions where, let's say, the artisan spinners and weavers were put out of a job. They lost their livelihood, and in that case, it actually took more than a generation for the economy to adjust and for the children, or children's children, to find jobs that would pay them more than what their parents and grandparents earned as artisans. So the transition itself can be very painful, but it's not always painful. If you think of the internet boom of the 1990s, or I think it's actually more accurate to call it the computer boom, because that's when the computer revolution that had been ongoing since the 1980s was finally reflected in productivity statistics, that process involved transitions, like you had typists transition into other kinds of white-collar positions. Frankly, the 1990s were not a very painful period; you had sufficient growth and enough demand for labor that the gains were pretty broad-based. So I wouldn't say it has to always be a negative transition.
SPENCER: And I guess in some cases, it kind of went smoothly. In other cases, it actually led to unrest. You have cases like the Luddites.
ANTON: If it's concentrated in one sector, and you have this clear category of losers, and the benefits are spread more broadly across the economy, then it's more likely to lead to unrest. But if the benefits are pretty broad, and you can say that during the 1990s, the transition, the change affected a lot of different professions at once in somewhat incremental ways, then it doesn't have to be painful.
SPENCER: In your models, do you have estimates for things like job loss, or is that just not part of what you're modeling?
ANTON: I don't have any confidence in the exact numbers there, I have to say. So I wouldn't rule out that we see really significant disruption. But I also wouldn't bet on it. I wouldn't rule out that we can digest this in a way that is relatively smooth. I'm hoping that it's going to be more like the 1990s than like the 1800s, but it's so hard to tell.
SPENCER: So we've talked about GDP going up, we've talked about the distribution of income, but we haven't yet talked about other values potentially being destroyed. At the beginning of this conversation, we talked about how GDP, while just a metric, tends to track human well-being. When it goes up a lot, that tends to be a good thing for well-being because people have more material comfort.
ANTON: Yeah, that was your third factor, right?
SPENCER: It's the third one, exactly, exactly. And I think some people are worried that AI might destroy other things or maybe disconnect GDP from well-being. Do you think there's a reason to think that GDP increases from AI might be different and not lead to as robust well-being effects as some historical GDP increases?
ANTON: Yeah, I actually want to broaden it even further. You said there could be adverse effects that are not reflected in GDP, but there could also be lots of positive effects that are not reflected in GDP. If I look at how I use the technology today, there are lots of situations where, let's say, I have some weird medical pain. I ask the AI what that means, and it can give me instant service at 11 PM, and I don't have to go to the doctor.
SPENCER: You've been reducing GDP, right?
ANTON: Maybe it has been reducing GDP, but when something like that happens, I appreciate it. That would be the positive side of the coin. On the negative side, I guess we have all known, we've all read news reports about AI systems that essentially talked people into suicide. So I would say both of these sides exist. I am somewhat hopeful that there are going to be lots of positive aspects to it, but at the same time, we shouldn't sugarcoat the negative ones.
SPENCER: Absolutely. We actually ran a study looking at a number of different potential concerns about AI and asking people how concerned they were. There are a lot of different concerns people have, everything from misinformation and automated manipulation to scams to increasing authoritarian control. You can imagine authoritarian governments monitoring emails or monitoring telephone calls. It's one thing if a human has to read it or you just use keyword matching. It's another thing if you could have an AI read every citizen's email communication, phone communication, and score them on whether they are faithful or do they give the committee that runs the country what they want, etc.
ANTON: So you could have an individual agent tracking everyone, and if it's a really smart agent, that could be really dystopian.
SPENCER: Exactly, exactly. So I think there are a lot of potential concerns like that. But what about on the issue of whether somehow the GDP from AI won't have as much benefit as it normally does? Do you think there's any reason to think that, or is it more just that there might be these externalities to be concerned about?
ANTON: Yeah, so that's where my previous point comes in. We should worry whether GDP reflects the negatives, but should we maybe also worry whether it reflects the positives? Because in my example of the chatbot that gives me an answer at 11 PM that is really valuable to me, that's not captured in GDP. I only pay my 20 bucks a month subscription, but in this case, it was actually worth hundreds of dollars to me. So it cuts both ways. And since I think it's going to be such a revolution — not necessarily the current AI, which is very powerful in many ways, but the most powerful AI is yet to come — since it's going to be so transformative, I think both the positives and the negatives are going to be humongous. By all means, we should expect that a lot of those positives and negatives are not going to show up directly in GDP, but I wouldn't have a sense that one is going to clearly outweigh the other. You could say maybe it's going to make GDP less valuable as a measure because there's going to be so much extra going on.
SPENCER: Do you think that economists are applying standard economics, but that somehow that doesn't capture what's going on here with AI well enough? For example, the recursive effects you described, is there something about the way economics models things that does not prepare economists well for the situation?
ANTON: Yeah, it depends on the time horizon that we're looking at. For the current AI, I think our models are a relatively decent guide for how they affect the economy and what to expect. Let's say, the future of self-improving AI, agents that are very autonomous, I think it's true that current economic models have a hard time understanding that type of world because it's going to be a very different world. Ultimately, if we try to project forward even further into a world where we may potentially have superintelligence without the decreasing returns that you and I were talking about before, you could make the case that even the current way our economy is structured, with lots of decentralized market transactions, is going to change fundamentally. Maybe you're going to have just a handful of really huge AI systems that are going to transact in very different ways. I can feel that there's something there, but frankly, this is so far from the way the world is currently organized that I have very little ability to project something. It's almost past an event horizon.
SPENCER: I think, if I'm not mistaken, the original concept of the singularity was not this idea of humans and AI merging or something like this. It was a point past which things became unpredictable.
ANTON: That's kind of how John von Neumann originally framed it. He said something like, "It's a point beyond which human affairs will not continue the way that they currently do, and that the world will just be fundamentally different."
SPENCER: For some reason, I thought it was Vernor Vinge who coined it, but maybe it was von Neumann.
ANTON: Yeah, von Neumann was actually the first one who coined the term singularity, and then Vernor Vinge added a lot more flesh to it, but that was decades later.
SPENCER: Got it. In terms of the economic modeling, one thing that confuses me is that in these production functions, when you're thinking about productivity, you've got this capital input and this labor input, but it feels to me like AI is not quite capital, not quite labor, and it's something in between. Am I right about that? Or can we say, "Oh no, it's just capital?"
ANTON: Yeah, in some sense, the reason why we wrote down these production functions during the Industrial Age was because we simplified what was around us, and we picked the two most important factors and said, "Well, that's what output depends on." If you look at pre-industrial times, you didn't use capital and labor, but you used land and labor as the main factors. At some point after the Industrial Revolution, capital became so important, and land is just this tiny thing. Land still matters today but economically it's more like a rounding error. I think what you propose is that our economy is going to look radically different, and therefore our production function, the way we measure it, is going to look radically different. I think that's very likely to be true. Some economists are thinking of the economy as something that combines hardware and software, and maybe that's going to be a neat and useful model to describe how output will be produced in the future. Right now, you can say if you are in a pure white-collar job, you are contributing software. If you are in a blue-collar job where you are mostly performing physical labor, you're contributing hardware. In some sense, AI systems and robots can also do these two things. We know that the production of output as a whole requires hardware and software. It requires the physical and the cognitive part. Perhaps that's going to be a really useful description of the economy.
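One way to sketch the hardware-and-software framing Anton mentions is a Cobb-Douglas production function over the two factors, where humans and machines each contribute to both. The functional form and every number here are illustrative assumptions, not a model from the conversation:

```python
def production(hardware: float, software: float, alpha: float = 0.5) -> float:
    """Toy Cobb-Douglas over the two proposed factors.

    Y = hardware^alpha * software^(1 - alpha). The exponent alpha = 0.5
    is an arbitrary illustrative choice.
    """
    return hardware ** alpha * software ** (1 - alpha)

# Humans and machines both feed each factor:
human_physical, robots = 1.0, 0.0   # hardware (physical) inputs
human_cognitive, ai = 1.0, 3.0      # software (cognitive) inputs

print(production(human_physical, human_cognitive))               # baseline → 1.0
print(production(human_physical + robots, human_cognitive + ai)) # with AI → 2.0
```

Quadrupling the cognitive input only doubles output here: as long as robotics lags cognitive automation, the hardware side remains a complementary bottleneck, which is exactly why the two-factor framing is useful.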
SPENCER: One critique that's been levied at the idea that AI is going to be such a huge deal is that the cost of building these AIs seems to have gone up exponentially if you just track how much money is being spent on them. There are some other analyses looking at things like, "Well, how much additional money they have to spend to get these increasingly better models, and it seems like it's exponential." I haven't dug into the details of that, but have you looked at that, and do you think that changes any of this analysis? Or does it actually not matter?
ANTON: I do think that that's an important factor. Over the past 15 years or so, the amount of resources going into AI has tripled every year, or at least the amount of resources going into training the top AI systems.
SPENCER: You can't triple that for that long every year.
ANTON: At some point, it's going to be as big as the entire economy. And then it would have to be bigger than the economy, and nothing can be financed that's bigger than the economy itself. We wouldn't have those resources. So I think what's going to happen is, in the very short term, the next couple of years, it's still possible to continue on that path, although financing it is getting progressively harder. Now, to me, personally, one of the big questions is whether we will reach recursive self-improvement or AGI before or after the point where it gets really hard to continue the scaling. If we reach it before that point, then that development in itself is going to give rise to a growth takeoff. If we run out of our ability to scale before AGI is reached, then we may see something like a period where progress will slow down a bit, and it could take quite a few years longer until we reach AI that can fully perform human-level intelligence. But I'll also say right now, it doesn't look like that slower trajectory is the likely one.
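A back-of-the-envelope shows why the tripling can't continue for long: if frontier training spend triples yearly while GDP grows a few percent, the spend's share of the economy hits 100% within a handful of years. The starting share below (0.5% of GDP) is a hypothetical, not a figure from the conversation:

```python
def years_until_exceeds_gdp(initial_share: float,
                            spend_growth: float = 3.0,
                            gdp_growth: float = 1.03) -> int:
    """Years until training spend would exceed total GDP.

    Assumes spend triples every year while GDP grows 3% per year;
    both rates and the initial share are illustrative assumptions.
    """
    share, years = initial_share, 0
    while share < 1.0:
        share *= spend_growth / gdp_growth
        years += 1
    return years

# Hypothetical starting point: frontier training spend at 0.5% of GDP.
print(years_until_exceeds_gdp(0.005))  # → 5
```

Even from a tiny base, the compounding exhausts the entire economy in about five years, which is why the current scaling path must break one way or another well before then.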
SPENCER: Yeah, there's this interesting balancing act where as long as it continues being exponentially more expensive to keep pushing out the next models, AI companies have to raise more and more capital. Even if we're on a trajectory where AI is going to be phenomenally transformative, it's still possible AI companies run out of money if at some point they just can't raise that next round of financing, if the amount of profit or revenue they're making doesn't keep up with the growth in fundraising. You can imagine investors getting scared and souring and the whole market collapsing, even if we're on this transformative trajectory. So this interesting thing where I think that even if AI is incredibly powerful, it doesn't guarantee that we can't have an AI market collapse in the interim.
ANTON: Yeah, I think that's right. To add to that concern, in some sense, the systems that we have right now are not paying for themselves. If you look at all the published financial projections, these companies are predicted to make losses for some years still. In some sense, it all depends on the ability to convince investors that this is something worthwhile, and we know that investor sentiment is always a little bit fickle. If that investor sentiment were to dry up, then they could not continue the scaling.
SPENCER: Another thing people talk about sometimes is that technology can take a long time to get adopted. Even if you have incredible technology, it might take a while for all the companies to be using it and jobs to be replaced, et cetera. Does that act as a small delay, or could that actually be significant when we think about the effects of AI on society?
ANTON: Yeah, that's an important question. My hunch is it's not just a small delay, but a medium delay. At the same time, you can see the capabilities of these systems, and eventually they will be rolled out. Traditionally, there have been these S-curves of adoption. There are some early adopters, then late adopters, and so on. There are also interesting economic theories of technology adoption, and one of the factors they emphasize is that having a very skilled workforce makes technology adoption easier and faster. AI could, at some point, actually also be that skilled workforce and could roll itself out. We are seeing this in some ways already; an agentic system can spawn sub-agents and essentially hire digital workers for tasks. That is speeding up the rollout of AI technologies.
SPENCER: To be honest, I've been mind-blown by the speed of AI adoption. If you look at how quickly ChatGPT started getting used by consumers, it's one of the fastest growth curves in history for a product. If you look at AI rollout by companies, it's almost unbelievable how quickly companies start adopting AI into their products. Maybe part of that is just that it's an easy technology to adopt in the sense that you can just stick in an API call and have an AI do something for you. It doesn't mean it does exactly what you want, but it's easy to integrate it.
ANTON: In some ways, many companies pay lip service to having adopted AI, but then you can't really see anything in their productivity or profit numbers that indicates they have adopted it very productively. My sense from speaking to lots of non-tech companies is that they are actually struggling. They are indeed making genuine efforts to integrate AI, but many of them are still figuring out what the most productive use cases are and how to restructure their organizations to take advantage of it. That's oftentimes one of the slowest things.
SPENCER: The last time I logged into Facebook, it told me that there was some new AI that creators on Facebook can use. I was like, "What is this thing?" It basically said that it would automatically read my past posts, write five new posts for me, and then automatically post them one a day. I was like, "What is this slop engine?"
ANTON: Very interesting.
SPENCER: That's one step away from asking why you even have people on Facebook.
ANTON: So that's almost like Moltbook.
SPENCER: Yeah, exactly, except in theory, with humans watching. Do you think in a world with transformative AI, if it goes the way that you're predicting, it makes sense to invest in something like the S&P 500?
ANTON: Well, the first thing I should say is never listen to an economist for investment advice. There are some people who say you'd actually do really well if, whenever an economist recommends something, you short that thing.
SPENCER: It's funny.
ANTON: And again, with the understanding this is no investment advice. The thing about the S&P 500 is that it essentially captures a really large part of the economy and a really large part of all the listed companies, and that means your risk is spread relatively well. If you think that there's broad-based economic growth, then probably that's going to be reflected in that kind of index as well.
SPENCER: I suppose a counter case could be made saying, if it turns out it's just a couple of private companies that have the most powerful AIs, and those AIs can essentially do nearly anything that a human could do and start replacing labor all over the place, you could imagine that could lead to the value not being reflected in the S&P, and, in fact, maybe even the S&P collapsing as those AI companies start replacing all the work of other companies.
ANTON: Yeah, that is actually an interesting argument. Let's say, if we continue on the current trajectory, there are a handful of companies that develop really powerful AI systems, my expectation would be that companies across the economy are going to continue to adopt these systems, combine them with their in-house data and expertise, and let's say there is no reason why OpenAI should produce a competing Coca-Cola. We would expect that Coca-Cola still exists, even if we have really powerful AI systems.
SPENCER: Yeah. Interesting point. So before we wrap up, a couple more questions for you. Do you think that we need some kind of new taxes if transformative AI indeed starts to happen?
ANTON: Yeah, I have actually written two papers on that topic recently, and it seems to me that right now, our tax system is primarily based on taxing labor. If we believe that the labor share of the economy is going to go down significantly, I don't think that's a good idea, and I don't think our governments will be able to fund themselves productively based solely on taxing labor income. It would probably be a good idea to shift away from labor taxation, at first toward consumption taxation. If labor really becomes a very small part of the economy, it may also become desirable to shift towards capital taxation. I guess I can leave it at this.
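The fiscal concern can be made concrete with a toy calculation; the tax rate, the GDP path, and the labor-share path below are all hypothetical numbers, not projections from Anton's papers:

```python
def labor_tax_revenue(gdp: float, labor_share: float, tax_rate: float = 0.3) -> float:
    """Revenue from a tax levied only on labor income.

    All parameters are illustrative assumptions.
    """
    return tax_rate * labor_share * gdp

# Suppose GDP quintuples while the labor share falls from 60% to 10%:
print(labor_tax_revenue(gdp=100.0, labor_share=0.60))  # today ≈ 18.0
print(labor_tax_revenue(gdp=500.0, labor_share=0.10))  # future ≈ 15.0
```

Despite an economy five times larger, revenue from a labor-only tax base shrinks, which is the arithmetic behind shifting the base toward consumption or capital.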
SPENCER: Final question for you. Besides taxation, if government officials were to take the idea of transformative AI seriously, what would they do?
ANTON: There are so many areas in which it will fundamentally transform the world. My impression is that they are taking it more and more seriously. I'm obviously an economist, so that's kind of my lamp post under which I'm searching for economic solutions. I can tell you, even if we just limit ourselves to the economic domain, there are going to be lots of challenges. We discussed several of them, like income distribution, for example. But if you go beyond that, there are many challenges that will be of a political nature, and there will be civic challenges. What does it mean to live in a world where we are no longer the most intelligent beings? Ultimately, how can we make sure that these intelligent machines still care for our well-being? I think those questions are going to become probably the most important ones in that far future of AI.
SPENCER: Anton, thanks so much for coming on the Clearer Thinking Podcast.
ANTON: Thank you for having me.