Clearer Thinking with Spencer Greenberg
the podcast about ideas that matter

Episode 136: Why capitalism doesn't live up to its promises (with Martin Schmalz)


December 15, 2022

Why doesn't capitalism seem to be living up to its promises of free and fair competition, low prices, and high quality goods and services (at least in Western nations)? What did Adam Smith fail to foresee about the ways economic systems would change on the way to 2022? What is "common ownership", and what effects does it have on economies? What interventions should be implemented to keep an economy healthy? Is it easier to put pressure on business leaders or politicians? In terms of shifting incentives for the sake of mitigating climate change, how effective is it to divest from "brown" businesses and invest instead in "green" ones? What is the AI revolution really about? Is it conceivable, even in theory, that AIs could make predictions in "uncharted territory" where the present is completely unlike the past? (But for that matter, how well can humans make predictions in such cases?) Is the hubbub around AI just a distraction from other more important issues? How can we keep AIs from reinforcing existing biases?

Martin Schmalz is Professor of Finance and Economics at Saïd Business School, University of Oxford. He holds a graduate degree (Dipl.-Ing.) in mechanical engineering from the Universität Stuttgart (Germany) and a M.A. and PhD in Economics from Princeton University (USA). Prof. Schmalz is the Academic Director of Oxford's Blockchain Strategy Programme, and co-director of the Open Banking & AI in Finance Programme. He co-authored The Business of Big Data: How to Create Lasting Value in the Age of AI, and was featured as one of the "40 under 40" best business school professors worldwide at the age of 33. Read his writings on his blog, learn more about him on his website, and follow him on Twitter at @martincschmalz (governance & antitrust) and @oxfordfrom (everything else).

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Martin Schmalz about the concentration of wealth by large investment firms, consumer protection and transparency, and evaluating AI for biases.

SPENCER: I'm really happy to tell you that today's episode is sponsored by Give Directly. Give Directly is a global nonprofit that lets you send money directly to people living in extreme poverty with no strings attached. So you might wonder, why give people cash? What's the point? Really, the idea is that cash enables people to use the money for whatever they think is best. And we're talking here about people living in extreme poverty. So they tend to have really good uses of money that are going to really improve their lives. They do things like use this money to make sure that their family has at least one meal per day, or to cover school fees so the children can get their primary education, or to add metal roofs for protection from the elements because a lot of people live with roofs that leak so their stuff is destroyed, they get all wet, they can't sleep, etc. If you're interested in learning more, you can go to, and you can send money directly to someone living in extreme poverty so they can spend it on what they need most. That's

SPENCER: Martin, welcome.

MARTIN: Thank you very much for having me.

SPENCER: Most of us live in capitalist societies. And this argument that we all hear is that competition is really good because if you have companies competing with each other, that will tend to improve the quality of goods as they compete over quality. And it will also tend to make goods cheaper as they compete for price. But as you well know, this doesn't always work out. And I know that you've had some interesting insights in this area. So why don't you start us off telling us about why this doesn't always work out?

MARTIN: Exactly. As you said, Adam Smith, in his "Wealth of Nations," basically explains how self-interested behavior actually leads to — nowadays, we would call it — consumer welfare. But it does not always work out that way, because the assumptions or the conditions of Adam Smith's time no longer hold. For one, in the Adam Smith world a few hundred years ago, companies tended to be owned by the manager who ran them at the same time. So in economists' speak, there were no agency issues, no shareholder or managerial agency issues. That's one. The other one is that the owner of one business did not also tend to be the owner of another business at the same time. But thanks to the inventions of modern finance, and the teachings of finance professionals like myself over the last few decades, now every self-respecting investor holds a diversified Vanguard or BlackRock mutual fund, ETF, or index fund that holds many different firms, including firms in the same sector. So really, each one of us is a shareholder not only of one firm, but also of its competitors. So the same logic that applied in Adam Smith's time simply doesn't apply anymore. If, in Adam Smith's time, one bakery wanted to undercut the other bakery in price or bake better bread to win market share, why would that still be the case if the bakeries have the same shareholders? Then they'd rather act as divisions of the same monopoly. And that's simply what we observe more and more in US corporations: they increasingly have the same shareholders, or the same relevant shareholders at the top, with less and less variation in that.

SPENCER: So the basic idea is, if you have two competitive bakeries, they're going to compete with each other on quality or on price. But if you have two that are owned by the same owner, they might see an advantage in branding them as different bakeries, so you're not aware that they have the same owner, but they're not actually going to be in competition with each other. Maybe one of them will be, for example, the fancy bakery, and the other will be the lowbrow bakery or whatever. But they're just trying to maximize overall profit from the two ventures jointly. And so there's no actual competition occurring.

MARTIN: That's exactly right. So perhaps to illustrate this, this basically happened to me, not in the bakery sector, but sunglasses about 10 years ago, when I walked through an airport, and I saw these different sunglass shops selling all these different brands of sunglasses. And I just wondered, how do the sales arrangements work between these sunglass brands and these shops that set themselves up in the airport? What I then found out was that all these shops in the airport and all these different sunglass brands, all of them are owned by one single company called Luxottica, which is owned by one of Italy's richest men. Or you go read a Wikipedia page on LVMH, the French luxury fashion brand, and check out all the different brands that belong to them. So a lot of the apparently different businesses that you see when walking through an airport are really divisions of the same monopoly. And similar logic then just applies also in other markets, where it's not literally the same firm that has subsidiaries that sell the same products or similar products (differentiated products), it's still separate firms, but they have the same shareholders. And it just turns out that they increasingly behave as if they were part of the same firm when they have the same shareholders.

SPENCER: So what you're saying is that even if they're not directly owned by or run by the same manager, if the large investors in those companies are the same, then effectively, they may not be in true competition with each other.

MARTIN: Exactly. So there was an episode a few years ago when Warren Buffett bought the largest stakes in America's four largest airlines. And after that, the airlines simply competed less aggressively with each other as a result of that acquisition. And it turns out, the profits go up. But of course, the airline passengers are not particularly happy about it. So this is another example of where it happens in publicly traded firms.

SPENCER: So one thing people might wonder is: suppose you had one manufacturer making most of the sunglasses, why aren't there just a bunch of competitors that come into the market saying, "Well, they're artificially keeping the price high, so let's just make a new sunglass company that undercuts them"? And then you're back in a competitive landscape.

MARTIN: That's true, and it happens. One reason is that Luxottica then buys them, because that turns out to be more profitable than letting them compete. Another thing is what you'd call vertical integration. If you also own the sunglass shops at the airport that distribute the sunglasses, then it doesn't really help a new sunglass manufacturer to come into the manufacturing market, because they would also need the distribution channels. So there's really a benefit of being the owner not only of the sunglass manufacturing companies, but also of the distribution channels, as well as the insurance companies; eyeglass insurance is also owned by the same company. So this vertical integration can also make entry by new entrants harder. And in different industries, say the airlines, there are of course regulatory hurdles to entry. I, for example, could not be a majority owner of an airline that operates domestically in the US. So there are barriers to entry through regulation as well. The same is true in banking and in many other sectors.

SPENCER: So how feasible is it to buy sunglasses not made by Luxottica? Is it actually difficult to find ones that they don't make?

MARTIN: No. I bought glasses from an eyeglass company in Ann Arbor, Michigan called SEE, which is not owned by Luxottica. There's Warby Parker. So there are these startups that indeed see that it probably doesn't cost $600 to manufacture a piece of plastic and distribute it. So there are definitely competitors popping up. But they simply are not nearly as large and don't have the same dominance as Luxottica.

SPENCER: Got it. So it seems like what may happen is that it's pretty tough, in general, to make a successful startup. And furthermore, there are things that these large companies can do to make it even harder to compete, whether it's through regulatory barriers that the companies might actually advocate for, or through vertical integration, where it's very hard to compete with them because they have the whole supply chain in place, and maybe they can keep costs a lot lower than you can in various respects. But occasionally, when these startups do appear, companies like Luxottica might just try to buy them. And even if they don't get bought, they might just have a small market share, so you may not encounter them.

MARTIN: That's a very good summary.

SPENCER: So you mentioned shareholders and how increasingly you might have these really large shareholders that own parts of many companies. You give this example of Berkshire Hathaway owning a bunch of airlines. I wonder if this only really has a significant effect when you talk about really large shareholders, like someone who owns a significant percentage, or a company that owns a significant percentage of multiple companies. Because I imagine that most shareholders, like if they're just owning the S&P 500 through an ETF or something, they're not actually actively engaged in such a way that would reduce competition between those companies.

MARTIN: Yeah, I think that's exactly right. What is expected to be necessary for that story to play out is that a shareholder that has influence in one firm also has a financial interest in the competitors. So here's an example where it wouldn't work. Who's the largest shareholder of Facebook? Well, the most influential shareholder of Facebook probably is Mark Zuckerberg, I would venture. So the question is, does Mark Zuckerberg have large financial interests in the competition? I don't know who the competitor of Facebook is. I don't know, maybe Google. But the answer is obviously no.

SPENCER: Well, they already tried to buy a lot of them. [laughs]

MARTIN: Well, that's fair enough. Fair enough. The point here being: yes, Vanguard and BlackRock and these large asset managers, of course, have stakes in both Facebook and Google. But they have little or no influence in these companies. So these common ownership stakes across these firms probably don't matter very much, because there are larger, much more influential or even controlling shareholders in these firms. Where it starts to really matter is when these large blockholders that control the firm and don't own the competition are not there — subtract Mark Zuckerberg, Sergey Brin, and Larry Page from Facebook and Google, and suddenly the largest remaining shareholders would also be holding financial interests in the rivals — and at that point, you get a problem. Or think of Amazon and Tesla. Obviously, Amazon has a large shareholder called Jeff Bezos, and Tesla has a large shareholder called Elon Musk. Now, if you think of these companies without these large shareholders, then you have a situation with a lot of common ownership, in which the largest remaining shareholders are, say, Vanguard, BlackRock, State Street, Fidelity, T. Rowe Price, Capital Research — just the large mutual fund companies — which then end up being the most influential shareholders with the largest sway in any corporate election. That is the problem we're describing in our research.

SPENCER: I tend to differentiate between two types of companies when I think about company behavior for large companies. The first are sort of these large companies you can view as profit maximizing agents, not to say that they're perfectly rational, but they're essentially trying to maximize profit or something like profit, whether it's revenue, or prestige, or whatever. But basically, it's a kind of an optimizing machine. And then the second type, I would say, are more like owner-led companies — like whether it's Elon Musk leading Tesla or SpaceX, or Jeff Bezos — where it's really a single person who has enormous influence over the company. And then in order to understand the company behavior, it's not enough to just say, "Oh, they're trying to maximize profit or something related to profit. They're really trying to do what the founder wants them to do." And yes, they care about profits. But it's more complicated than that.

MARTIN: Yeah, let me just paraphrase what you said in a slightly different way, which is: how badly do you want this particular company to succeed? The issue that I see is that Elon Musk really badly wants Tesla to be a company that succeeds. Whereas who takes that role at Delta Airlines or United Airlines? Yes, they have a CEO. That CEO probably has some incentives to make sure the company succeeds. But among the largest shareholders (Vanguard, BlackRock, State Street, and so forth), who takes the role of really caring about this particular company in isolation, as opposed to the entire portfolio of stocks? So it's really the absence of such a large, influential shareholder who cares just about the company and maximizing its value, as opposed to also having interests in the rivals. And the reason that matters is the following: if Tesla does really well, because it produces awesome cars that people want to buy instead of a Mercedes or a General Motors or whatever else I would buy, that's actually bad for the rivals. If Tesla innovates or reduces its costs, that's bad news for the rivals. So if you're a shareholder of both one company and its closest competitors, then innovation and cost reduction are really much less in your interest than if you only held a large stake in one individual firm. In some ways, I would say that Elon Musk is just somebody who wants Tesla to succeed really badly. In Adam Smith's world, the baker on one side of the street wants his bakery to succeed really badly, without any regard for the success of the bakery on the other side of the street. And similarly, that's true for Elon Musk, or say Amazon: Jeff Bezos cares about Amazon's success and not about the success of the brick-and-mortar retail stores that he puts out of business. Now, if he owned these retail stores as well, then he would care about the total portfolio, not just about the welfare of Amazon, the corporation.
And that's the sense in which we're saying that, for capitalism to succeed, you actually need the influential shareholders in each firm to care about the self-interest of that particular firm, and not about a huge portfolio and the total assets under management of a large fund company.

SPENCER: I know that you've written a number of papers on the topics we've been discussing. What was the reaction from companies, the world, other academics when you started writing about this?

MARTIN: Well, we could spend the entire podcast telling stories about that. After the paper got some traction, one thing that happened is that a co-founder of BlackRock wrote an op-ed in the Wall Street Journal in which he explained that our research lacked "economic logic and factual support from the real world," which was an interesting statement about a paper that literally uses data from the real world, and a whole bunch of economic logic that has been developed by top economists over four decades.

SPENCER: Now, were you calling out BlackRock particularly in your paper? Is that why they replied like that? Or was it just by implication that you were calling them out?

MARTIN: To be very clear, this problem is definitely not about BlackRock specifically. But it is true that, in the paper, we wrote that BlackRock played a prominent role, in particular in an event in which BlackRock bought another large asset manager. So it is kind of understandable why they felt directly implicated, although it's very clear, and we're kind of explicit about it, that this is not a problem specific to BlackRock, the company itself. But in response to that op-ed, I received a lot of emails from academics worldwide congratulating me, because what everybody inferred was that, apparently, our work was not stupid enough to just be ignored, but was being taken seriously. And then they started setting up websites, offering money to academics to pull our paper apart and criticize it publicly, sponsored research, hired academics as lobbyists, and so forth. So it did become quite scary, frankly, to be a junior academic in a system where so much money was thrown around to directly go after your research.

SPENCER: What was the scariest thing that happened because of that?

MARTIN: Look, when you get a call from a journalist who says, "You should know that a private detective is in front of your door, figuring out details about your family life," that is something where I just got pretty scared, indeed. So that probably counts as one of the privately more scary episodes. Of course, professionally, it was more generally scary, because you never quite knew which financial interest was on the other side. And disclosure was simply not as widespread and transparent as one hoped it would be in academia as well. So that got a little scary professionally, too.

SPENCER: So what ended up happening with this? Did you end up writing about it more? Were there any changes made? Or did it get picked up by politicians in any way?

MARTIN: We just wrote one paper. That was, let's say, the first serious empirical exercise identifying this problem and suggesting that it is probably something that more research should be done on and that regulators should pay attention to. And indeed, this has triggered an avalanche of papers: literally hundreds of papers have now been written on the topic, replicating the study, doing it with different methods, in different industries, and whatnot. A few years later, the FTC in the US held an entire day of public hearings on the topic. Then regulators worldwide also became acutely aware. In Europe and various individual European countries, and in jurisdictions in Southeast Asia, cases were fought that relate to this topic; the Australian Parliament held hearings to try to figure out how far common ownership by investment funds is also a problem among their publicly traded companies. So it has made quite an impact. I gave a talk at the White House Council of Economic Advisers. So politicians definitely paid attention and are aware of it. Policy proposals have been made. But in the end, of course, it's also a political choice what to do about this, if anything. Different countries have moved at different speeds on that one.

SPENCER: So what would you like to see done? What are some of the interventions you think might be helpful to help protect consumers?

MARTIN: Four years ago, what I strongly advocated for is that regulators should collect data and make it available for researchers, simply to shed more light on the issue and make it more transparent who owns the firms that they regulate, so that they could convince themselves, by doing their own analysis, of what the issues are. The databases are just terribly bad. And none of that happened over the last four years. So in the end, we ended up doing it ourselves and actually just posted a new paper, which will also make the data set available soon, so people can take a look at it and make the whole thing more transparent. So transparency is the main thing I was hoping for, as opposed to direct interventions. But in the long run, we'll have to think about direct interventions as well. At this point, Vanguard controls more than 10% of the shares of the average S&P 500 company, and they are on an exponential growth trend. And that's very understandable; I am one of the reasons. I'm a big fan of Vanguard in my identity as an investor, and I let them manage my money, my retirement assets, my savings, because they offer a great product, all while pointing out, as a researcher, that this may have side effects or problems associated with it that people have to pay attention to. And if that growth continues, the largest three asset managers are soon going to hold 30% of the votes of all, or most, publicly traded companies in the US. And the notion that this would not affect at all how the economy functions is just becoming increasingly absurd with every day that we wait to do something. And it becomes particularly absurd because there was a debate 80 years ago in the US about precisely that problem: that when investment funds, instead of just offering diversification for investors, start to influence or even take control of industries, this is not in the public interest.
That has literally been known for 80 years. And rules have been written to prevent it. But they've either not been enforced, or the rules were written in such a way that allowed the industry to circumvent them. So really, that's why we're saying it's a matter of political will, rather than of understanding that there's a problem, whether and what they're going to do about it.

SPENCER: That's wild how much concentration of ownership there is. I do wonder, though, whether an organization like Vanguard might be less of a problem, because Vanguard's own owners, essentially, are scattered across such a huge number of different people who are effectively holding those shares. Whereas with other organizations that own large amounts of wealth, there's more concentration of ownership. What do you think about that?

MARTIN: I very much understand that. And, as I said, I'm personally a Vanguard investor, and I don't feel like I should be implicated in somehow lessening competition among US firms. In some ways, it's also not the presence of Vanguard that is the problem, but rather the absence of somebody who is even more powerful and does not at the same time hold stakes in competitors. The problem is just that when a greater and greater fraction of shares in companies gets encumbered by these large mutual fund complexes, it becomes hard or impossible for shareholders that hold concentrated stakes in one firm to acquire those stakes, run activist campaigns to drive innovation, make management more efficient, reduce costs, and so forth. And I think you and I will probably agree that it is close to absurd to think that Vanguard could play the same role at Tesla that Elon Musk is playing at Tesla. So, again, the idea is: if you get rid of Elon Musk at Tesla, how would Tesla continue to run and operate five years into the future if it's essentially controlled by Vanguard? In some ways, the problem is that Vanguard does not actively control companies the way an equally powerful investor would who does not at the same time own large financial stakes in competitors. So the notion that Vanguard is passive and has all these small investors, and therefore doesn't intervene with firms, is the problem description, rather than an excuse or an explanation of why this wouldn't happen.

SPENCER: I'm confused, though. Because if they don't intervene — I don't know for sure. You could imagine maybe they do try to intervene. But if let's say they don't intervene — because they themselves are just passive investors, mainly with lots and lots of their own investors investing in their ETFs or whatever, then those companies would just be effectively managed by whoever runs them. I don't quite understand the problem.

MARTIN: That's right. So the question is just how much pressure that management has to reduce costs and increase the value of their particular company, as opposed to enjoying the perks of the office and making sure the company doesn't go bankrupt, but beyond that, not particularly innovating or cutting costs or going out of their way to attract customers from rivals. So again, let me try to illustrate that. There used to be a super innovative airline in the US called Virgin America. And Richard Branson, the largest shareholder and obviously the driver of that operation, put on lipstick and played flight attendant for a few days in order to generate PR. You can maybe just about see Warren Buffett doing the same (or perhaps not, for personality reasons), but you certainly can't imagine Vanguard doing the same: doing it at United Airlines one day, at Delta Airlines the next day, and at a third airline the day after. It's just absurd to try to steal market share from rivals and then steal it back on the other side. So that hasn't happened. But that is what would have to happen for capitalism to work the way it's supposed to in the textbook.

SPENCER: Going back to the data, in your original study that kicked this all off, what did you actually show with empirical data?

MARTIN: We collected data on the ticket prices that particular airlines charge on different routes — say, what price Delta Airlines charged, on average, in the first quarter of 2011 on the route from New York to San Francisco. So we have that data. And then we started correlating the changes in the ticket prices on these particular routes flown by a particular airline with the changes in the ownership structure of the airlines flying that particular route. And what you observe is that when the airlines that fly that New York-San Francisco route become more commonly owned — because some investor buys a whole bunch of stakes in them, for example — then ticket prices go up subsequently. Whereas if common ownership goes down — perhaps because some activist investor buys a large stake in one of the airlines but not the others, which reduces the extent to which the most influential investor owns a financial stake in the rivals — then the price goes down on that particular route. And the reason we did it that way is that it enables you to cleanly identify the effect of these ownership changes and difference out other effects of stuff that happened in the world. People might say, "Well, yeah, but oil prices went up. So naturally, ticket prices went up over the same time period that common ownership went up." But that's differenced out, because oil prices also went up on the New York to Seattle route, right? So what you're comparing is really route-level changes in ticket prices over time as a function of how the ownership structure of the airlines flying the particular route changed. So that's pretty nuanced and complicated. I hope that's kind of clear.
And then the claim to fame of the study was to show that when BlackRock acquired another asset manager called Barclays Global Investors in 2009 — which is really where all that index business sits — the changes in ownership of the airlines that were caused by that merger could be used to predict changes in airline ticket prices, again at the route level in the US, a few years into the future. And that becomes increasingly hard to explain with theories other than, well, common ownership. So that's what the study did.
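The route-level comparison Martin describes boils down to a difference-in-differences calculation. Here is a minimal sketch in Python; the fare numbers are entirely made up for illustration, and the real study estimates this in a regression over many routes and quarters rather than from four averages:

```python
# Difference-in-differences, reduced to its arithmetic core.
# All fares below are hypothetical, purely for illustration.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Treated route's price change minus the control route's price change.
    Shocks that hit both routes (e.g. an oil price rise) cancel out."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Suppose common ownership rises only among airlines on the NY-SF route,
# while an oil shock adds $20 to fares on every route in the same period:
#   NY-SF:      $300 -> $335  (= +$20 oil + $15 ownership effect)
#   NY-Seattle: $280 -> $300  (= +$20 oil only)
effect = did_estimate(300, 335, 280, 300)
print(effect)  # 15: the shared oil shock differences out
```

The point of the design is visible in the arithmetic: anything that moves both routes equally drops out of the estimate, leaving only the route-specific change attributed to ownership.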

SPENCER: So was it essentially a regression discontinuity technique where you're kind of zooming in at the moment when the ownership changes and seeing if the price just before the change and the price after the change are different?

MARTIN: Kind of. Very close. It wasn't literally a regression discontinuity technique, but it was another technique in this family of modern empirical methods: a difference-in-differences design, combined with an instrumental variables technique. But yeah, it's a very similar idea to what a regression discontinuity does.

SPENCER: So you have an instrumental variable. So you have some variable that's effectively random that changes the outcome of interest. Do you want to just explain like what you were using there?

MARTIN: This is like an economics seminar here. I'm deeply impressed. What happens is, Barclays Bank goes through the financial crisis and kind of runs out of capital. Also, capital regulation changes, and it is no longer opportune for a bank to also own an asset management company. So they put Barclays Global Investors up for sale. And long story short, BlackRock ends up being the buyer of the firm. Now, the point is that this acquisition probably doesn't have a whole lot to do with the US airlines, which are a minuscule fraction of the portfolio of both firms. And certainly, it doesn't have anything to do with cross-sectional differences across US airline routes in expected changes in ticket prices, right? So that is the intuition for why one would come to view the ownership changes that result from that acquisition as exogenous, or quasi-random. The idea is that, as in a medical trial, you effectively randomly assign more common ownership to some US airline routes than to others. And then you see whether, in those routes that were treated with this pill called common ownership, ticket prices changed more than in the others. So that's the idea behind that identification strategy.

SPENCER: Right. So even though it's not truly a random assignment, like it would be in a medical trial, it's effectively random enough that you're treating the sort of shock of this deal occurring as essentially like randomly assigning who owns what route. And that lets you treat it as though it were effectively randomized.

MARTIN: That's exactly right. So economists call that quasi random assignment. That's exactly right.
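The identification logic described above can be sketched in a few lines. This is a toy difference-in-differences calculation with invented fare numbers; the actual study uses regressions with many controls and an instrumented ownership measure, not this two-by-two comparison:

```python
# Toy difference-in-differences: compare the fare change on routes that
# received more common ownership ("treated") against routes that did not
# ("control"). All numbers are invented for illustration.

def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Change in treated outcomes minus change in control outcomes."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Hypothetical average ticket prices (USD) before/after the acquisition
effect = did_estimate(
    treated_pre=200.0, treated_post=215.0,  # treated routes rose by $15
    control_pre=200.0, control_post=205.0,  # control routes rose by $5
)
print(effect)  # 10.0 -> extra increase attributed to the ownership change
```

Subtracting the control-group change nets out industry-wide trends (fuel prices, demand shifts), which is why the quasi-random assignment of treatment is what makes the estimate causal.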

SPENCER: So switching topics, you had this idea you talked about, which is that regulation is the job of policymakers, not investors, central bankers, and corporate managers. What do you mean by that? And could you elaborate?

MARTIN: Yeah. It just seems to me that there is a bit of a zeitgeist, I suppose, some sort of social movement of beliefs, that asset managers (or central banks) are suddenly in charge of rescuing us from all kinds of social problems, like climate change, diversity, and whatnot. And I just don't think that's going to go over very well, because better solutions exist. And we should emphasize those, rather than distract ourselves from these much better solutions by essentially outsourcing government to a bunch of unelected asset managers or unelected central bankers.

SPENCER: Can you give a couple of examples of what you're talking about?

MARTIN: Yeah. For example, climate change. The climate keeps changing, it gets warmer, corporations are raking in record profits, and at the same time, real interest rates drop and inequality increases. So those are all challenges of our time. And what one sees in the newspapers and the op-eds is that the culprits are very easy to find: the greedy managers, and investors, of course, because they don't care enough about workers and the environment and social cohesion. And the solution is then identified quickly as well. What we need is for corporations to adopt a social purpose...for all firms. And who's better equipped to enforce that than the large asset managers, who claim that they're universal investors and have a stake in the overall economy? And that then quickly turns into a sales pitch, where the asset managers try to convince the public to buy their ESG funds. ESG stands for environmental, social, and governance. And so those tend to be funds that claim to select only companies into their portfolios that perform particularly well along one or all of these three dimensions: those that are particularly good for, or not quite so harmful to, the environment, or have social issues on their agenda, or improve governance in some way. The problem being that these outcomes are not usually actually measured. And it's not clear that investing in these funds has any sort of actual effect on these desirable outcomes. But they seem to work really well as a marketing device. They charge higher fees. And somehow we are to believe that this is going to solve the social ills. And I just don't think that's going to work, for a variety of reasons.

SPENCER: Okay. So if I understand correctly what you're saying, you're making the argument that we have all these social ills, and people want to point to, for example, companies that are causing environmental damage, or causing issues with social equity or justice or whatever. And then they're saying, "Okay, well, these companies are behaving badly. So we need companies to not just try to maximize profit; we need them to adopt a social mission as well. We should ask people who are investing to invest in companies that are doing better things, rather than worse things." I think you're saying that this is just not a good approach to solving these problems. Is that right?

MARTIN: That's right. That's the claim.

SPENCER: Okay, so then can you elaborate why is it that you think these are not good approaches?

MARTIN: So the reason I think those are not good approaches is because the incentives of these actors are simply not well-aligned with society's interests. The universal investors, or some of them, claim that they own a stake in the economy, but that's simply not true. What they own a stake in is a bunch of the producers in the economy. But they don't have a stake in the consumers. They don't have a stake in the workers. They don't have a stake in the environment. So it is not exactly clear why, if they act in their own interest, the asset managers would care about displaced Bangladeshis that have to flee rising sea levels. They don't have a stake in those, right? So at least they don't have a financial incentive to internalize the externalities that the producers impose on these populations. But nevertheless, they kind of market that and their products as being solutions to these problems.

SPENCER: Well, I think some people are using a mental model along the lines of: if we pressure the people who lead companies, we can get them to care more about social outcomes and the environment. And that pressure might be social pressure or making them look bad. It might also be not investing in their company, or asking people to divest, etc. And so, if I'm trying to steelman their side of the argument, I think it goes along the lines of, "Well, these companies are run by humans at the end of the day, and humans are susceptible to pressure. And additionally, a lot of people do care about more than just maximizing profit, even if they lead a company."

MARTIN: That's fair. I'm also not trying to argue that there's absolutely no effect of pressuring corporate leaders to pay more attention to these issues. I just don't think that it will lead to anywhere near enough of an effect to actually make a significant difference on these issues. But the cost of it is a huge distraction from the actual solutions. So let's have an example. Let's take climate change. So, let's say as an asset manager, we just go and divest from the brown assets, the coal producers, and so forth. The first problem is that only the ESG funds divest, but the other funds issued by the same asset manager don't divest at all. They might actually issue another fund that buys precisely the shares that the other fund divests and offer it to investors who don't care, or who don't care about rescuing the climate, or who don't think that this is a proper approach to it. Now, the second problem is related, which is that if one shareholder sells the shares of the coal companies, well, that reduces the price of the coal company; but taking the dividends that come out of the company as given, that actually increases the returns. So you only have to find one investor in the world who does not have these social preferences and will be on the other side of the trade, buy the stock, and enjoy the higher returns that all these responsible green investors have caused. So the main result of this exercise is then redistribution from investors that have a conscience and try to help the world by divesting, to investors who don't have that kind of moral constraint on their behavior. And that would obviously not be the desirable solution. And I call that a distraction from the actual solution.
Because while we're all busy looking at advertisements for ESG funds and trying to convince central banks to buy only green bonds in their quantitative easing portfolios and whatnot, all that time we're not spending pressuring politicians to actually write laws, which are the proven instruments by which we would solve the social ills. There's what one would call a consensus among economists on this planet that the most efficient way of dealing with climate change is by having a price on carbon. And what we're not doing is pressuring politicians to implement that and to actually price carbon. And meanwhile, we just take this, what I think of as a cheap way out that makes us feel better but doesn't actually change anything, which is: I'll just shift my retirement money to a green fund that makes me sleep better at night. I'm even willing to pay a higher fee for it. My concern is just that this will not actually solve the problem that we're after. And what I think is at the core of that is a difference in ethics. Economists have often come across as kind of immoral actors, because they have a utilitarian ethic. They care about the outcome, but not about the way you get there. And many, and perhaps most, people in the population don't think about it that way. They want to take moral acts, do something that feels right, even if it doesn't lead to the desired outcome. And I think it is exactly that conflict between different ethical approaches that is being illustrated by my points here: "Hey, we should stop doing things that feel good, and we should start doing things that actually have an effect."

SPENCER: I think I mostly agree with you about the ineffectiveness of buying into ESG funds, investing in more socially responsible companies, and divesting from other companies. However, I do have some caveats to that agreement, and I want to run them by you and see what you think. So imagine that you're thinking about a particular good or service and you decide not to buy it, or maybe you own it and you decide to sell it. Classic supply and demand theory tells us this is going to push the price down. So if we apply the same reasoning to, let's say, divesting from companies that behave badly, we should expect by that argument that divestment will push the share price down. Now, the price is going down. Well, yes, someone else is going to own those shares. Like, if you own the shares of a company and you sell them, someone else is going to own them. But you will be pushing the price down a little bit. And companies do care about their share price. Empirically, they seem to care; they seem to spend a lot of time trying to keep their share prices up. But also, their share prices have meaningful effects for them. For example, the higher the price, the more capital they can raise by selling shares at that price. Borrowing depends on share price in different, complicated ways. So I'm just curious to hear your reaction to that: like, oh, actually, this will affect prices, and companies do care about prices, so it should be some kind of disincentive for bad behavior.

MARTIN: You are right about that. If a sufficiently large fraction of the market starts to shun these companies and actually manages to push the price down, I'm absolutely with you that companies will care. The question that research has not quite conclusively answered is how big that fraction of the market would have to be. And there must not be somebody on the other side of the trade who is able and willing to push the price back to where it otherwise would have been. And the research on this issue is to date not exactly conclusive in terms of whether companies that have been punished by divestment campaigns actually have a higher cost of capital (which is the story you tell) or not. It's inconclusive in a funny way. Because you find some papers that say, "Look, green investing works. The cost of capital is higher for the companies that have been divested." And other research says, "Look, green investing works, because the green funds (ESG funds) have higher returns than the brown funds." Well, look, one of these statements cannot be true, because the cost of capital of the firm is just the flip side of the returns to the investor. So if the divestment campaigns work in the way you just very clearly described, then it must be true that the brown investors, the brown funds that don't divest, are those that earn the higher returns. And the green funds are those that earn the lower returns, at least in the long run. In the short run, you could of course get a movement that pushes up the price of the green companies, and therefore also the returns of the green funds. But at least in the long run, you wouldn't expect to see high returns for the green funds. So what people would have to accept is that investing in ESG funds comes with lower returns, as opposed to higher ones. And that is the price they're willing to pay to perhaps improve the world.
But believing at the same time that they're going to get higher returns out of this, and believing that they're going to push up the cost of capital of the brown companies: those two beliefs are hard to reconcile. And I admit that what I just gave here is a bit of a cartoon version, a bit of a simplified version of the argument, and there are nuances and challenges to it. But I want to get across that you can't really believe both of these at the same time. And similarly, when your bank or financial adviser pushes a green fund on you and tries to explain both that it will solve climate change and that it will make you rich, then probably one of these statements is at least not quite backed up by evidence.

SPENCER: I'm not sure I see the contradiction, though. Because let's say that large numbers of people divest from companies that pollute a lot. That would, in theory, increase their cost of capital, causing them to perform worse, which means that all the other companies, if you're measuring relative to, let's say, the S&P 500 (or something), would perform better, just on a relative basis.

MARTIN: I see. Where I stumble in the example you just made is the step where a higher cost of capital makes them perform worse. A higher cost of capital can mean that they perform better. So, a higher cost of equity means the company has to pay more dividends to a shareholder for the shareholder to be willing to hold the stock. Let's make a little calculation here. If a stock costs $100, and the company pays $10 of dividends every year, and at the end of the year the stock is still trading at $100, that makes for a 10% return. If the stock price gets pushed down to $50, and it still pays $10 of dividends a year, that makes for a 20% return. So the firm's cost of equity capital has gone up from 10% to 20%. But that just translates to a higher rate of return for the equity holders. So the firm's cost of capital going up does not necessarily translate to worse performance from the perspective of the shareholders and the owners of the enterprise.
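The little calculation above can be written out directly. A minimal sketch, assuming (as in the example) a constant dividend and an unchanged year-end price, so the annual return is just the dividend yield:

```python
def expected_return(price, annual_dividend):
    """Return when the price is unchanged over the year: dividend / price."""
    return annual_dividend / price

# Stock at $100 paying $10/year -> 10% cost of equity / shareholder return
print(expected_return(100.0, 10.0))  # 0.1

# Divestment pushes the price to $50; same $10 dividend -> 20% return
print(expected_return(50.0, 10.0))   # 0.2
```

The firm's cost of equity doubling is the same number as the remaining shareholders' return doubling; that is the "flip side" identity discussed here.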

SPENCER: I don't know if I buy that because, first of all, which shareholders are we talking about? The shareholders who owned it previously have gotten really screwed, right? Because the price just went way down. You're talking maybe about new shareholders that are buying it after the fact, after it's already been punished?

MARTIN: Yeah, exactly. So that's right. That's what I meant by saying that, in the short term, you can get these effects where the green funds perform better: while the price is moving, they benefit, and the investors in the brown company lose while the price is being pushed down. But then looking forward, at these now depressed valuations, the expected returns from holding the stock are higher, so those investors actually earn higher returns. And the idea is precisely that, right? You increase the cost of capital for the firm, which means they use higher discount rates to evaluate projects, and then hopefully they do fewer of these climate-destroying or environmentally unfriendly projects. That's exactly the theory of how this is supposed to work. All I am saying is that this only works if the shareholders that remain in these companies then get really rich in the process, because they get higher rates of return on their investment that are not justified by, say, the risks that they're taking.

SPENCER: So something that confused me about this whole topic is how we should think about the pricing of stocks. Because let's suppose that the way stocks are priced is that everyone is perfectly rational and just considers every stock as a discounted stream of future cash flows. If that were the case, then with an individual investor divesting from the company, you would expect it to have an extremely minimal impact on the future cash flows of that company, because they're just taking their shares and selling them to someone else. Now, there are some edge cases where it could, in theory, affect the cash flows, like if it pushed down the price enough to screw up their borrowing agreements or something like that. But I think in most cases, we can say it's gonna have very little effect on the cash flows. So then, even if the price temporarily dipped due to this divestment, you'd expect it to come back quickly to reflect those future cash flows. Do you buy that reasoning?

MARTIN: Yeah, that's exactly right. But in that case, the cost of capital of the firm hasn't changed, in which case the divestment campaign has caused a temporary blip in the stock price but has really not made it more difficult for the firm to access the capital markets, and therefore will not change anything about its operations. So that's exactly the point: either the stock price permanently changes, in which case, after that price adjustment is done, the shareholders get higher returns; or it does not permanently change, in which case the firm's operations won't change. So that's the basic idea.

SPENCER: Right. So that would be like in a perfectly rational world where people only care about cash. But then, if we consider another world where there's this thing of like bad PR, or hype, or we consider all these kind of social factors, you can imagine that it might not actually work like that. Like, it could be that once there's a big divestment campaign, now the media is writing about this company all the time and how much they pollute, and this actually has a lot of real world repercussions. It hurts their brand, it makes people less want to buy from them, less want to partner with them. So then in that world, I can actually see this mattering a lot more.

MARTIN: Okay, so that's fair. Let's take an example. So I can totally see that if there's a negative PR campaign and people don't buy the product, that works. I can see that working much more easily than in the capital markets. But let's take a concrete example. So we find a whole bunch of university endowments that divest from the oil companies. Now, we went through the logic of whether or not that is going to make it more difficult for the oil companies to access capital markets and to drill new wells and take more oil out of the ground. And then we said, "Yeah, maybe it's not through the capital markets. But it's just that the media is going to make people aware that oil companies are bad, and people will buy less oil at the gas station." Well, yeah, that part I can see: when people buy less oil at the gas station, then that is going to lead to less pollution. I agree. I just don't think it has much to do with the divestment campaign in the capital markets. It has more to do with whether people cut back on their actual consumption. And that's why I'm saying that the capital market part is probably more of a sideshow, and addressing the real economy is the much bigger part. So yes, if journalists write about how we should drive cars less, and it changes public opinion, and we all ride our bikes to work, that is more likely to have an effect. But that's not the most efficient way of doing it. The most efficient way would simply be to tax gas more, which would also ensure that people buy less gas at the gas station. And we don't have to go through these complicated mechanisms of the capital market, or hope that people pick up on the divestment campaign and that it convinces the public, through the backdoor, to buy less gas.

SPENCER: I think I mostly agree with you. And I think if I had to summarize my own view, my concern is that divesting from companies, insofar as those companies are just treated as kind of cash flow machines, essentially, is just putting more money into the pockets of the people who don't divest. And I think, mostly it's not having a huge effect on prices in the long term. I think maybe it has a short term dip in the prices, but then I don't expect that to persist. I think at the extreme where you have really large investors doing this divestment, then it actually might really start to matter. It could actually materially impact the company. For example, it might cause them not to borrow money or not to raise capital and stuff like that.

MARTIN: Yeah, that's right. I think we agree on that one.

SPENCER: Cool. Okay. So then let's talk more about what you want to see happening. What are some of your other thoughts on how you want to see policymakers dealing with this? You mentioned a carbon tax.

MARTIN: Right. I don't know how literally I mean that, because obviously, people have tried that, and so far we haven't reached an agreement on it. My concern is really to shift people's attention in the right direction. While people's attention is captured with, "Oh, look, we found a solution: we're just going to make the Fed take care of these problems, or a big asset manager, by issuing more green bonds. And therefore now I don't have to pay any attention anymore to pressuring my congressman," that makes things worse on the margin. So if we just put attention back where the power to solve these problems actually lies, which is politics, pressuring the politicians more to do the job in the interest of the population, that is, more broadly speaking, what I wish would be happening. Now, whether that literally leads to a carbon price, or at least to not subsidizing fuel (there are nuances there, so we don't have to go all the way to one global carbon price, the economist's ideal), if it goes in the right direction in terms of where we put our attention, that is what I would like to see.

SPENCER: I suppose if we're going to be really cynical, similarly to the way you make the argument that companies don't have the incentive to solve these problems, and so you're kind of fighting against their incentive if you're trying to get them to solve it. You might also argue politicians don't have an incentive to solve these problems. And so you're actually fighting against their interests to get them to solve it. Because the reality is, most politicians aren't going to be there for that long. And climate change is this really long term phenomenon. Plus, it's a tragedy of the commons where if one group starts polluting less, they only see a small fraction of that benefit. So I'm curious to hear your thoughts on that in terms of, do politicians really actually have an incentive to solve these problems?

MARTIN: No, I think you're right to point to that issue. I suppose I'm more hopeful that if people write newspaper articles about the way a particular politician voted after receiving large lobbying donations from a particular interest group, that would have a much bigger effect on the politician's future behavior. Being dragged through the dirt in the press for behavior that goes against the interests of the constituents matters more than if somebody writes yet another article about how somebody got rich by buying a coal company. So I just have a little bit more hope that politicians would be susceptible to the kinds of pressures that you previously outlined you were hoping company executives would be susceptible to.

SPENCER: Right. At least politicians have to get reelected, whereas rich CEOs of companies, well, they can kind of just say, "Screw it, I don't really care that much what the people think."

MARTIN: It depends on who they get elected by. Those people, you and me, the citizens, are the ones who elect the politicians, but we're not the ones who elect the CEOs. So we can put lots of pressure on the CEOs, but we have no power over them, as opposed to politicians. That's very different. Who has the power to vote CEOs out of office, or more precisely speaking, to vote for the directors who then vote the CEOs out of office? Those are the large shareholders. But fun fact: the big, diversified universal investors are those that are least likely to vote for ESG proposals compared to other investors. So that's why I have very little trust that they would offer the solutions to this problem, rather than considering them part of the problem.


SPENCER: All right, so switching now to our final topic, let's talk about AI. I think we're gonna have some interesting disagreements on this one. But do you want to tell us your opinion on what the AI revolution is really about, and what are people getting wrong about this?

MARTIN: What I think the AI revolution is really about is mainly making faster, cheaper, and better predictions: predictions of generic relationships between stuff, like shoe size and body length, or my browser activity and my mental health, or whatnot. So, better predictions of generic relations where the past predicts the future very well. This is what I think is at the core of the AI revolution. Now, if I use these predictions to make automated decisions, then it basically becomes an AI system. So basically, what I just described is just a fancy way of saying statistics. And sometimes, but not always, statistical prediction is paired with automated decision making. And this ingredient, this very simple story I'm telling here, completely, fundamentally changes how firms and economies operate. That's enough. That is in contrast to saying that computers start to think like human beings and imitate human thinking, which is something you can read about in the newspapers a lot, along with discussions of when the singularity is coming and the robots are taking over. But I think while we're entertaining ourselves with these hypotheticals, the Earth is changing under our feet. And people who really want to be part of this revolution and benefit from it would be well advised to do this lowbrow tech that is actually already working and already changing companies and industries, rather than engage in speculation about what the future might hold in terms of quasi-human thinking.

SPENCER: So there's a lot there to unpack. So let me ask you some questions. I noticed that you want to basically say, "Well, what is AI? It's really just statistics." I do take some issue with that. And I think I take issue with that for two reasons. The first is that I think statistics is a different field than machine learning in a meaningful way. I think of statistics as the study of how to test a hypothesis, whereas I think of machine learning as a study of how to make predictions based on data. I don't view them technically as the same, or maybe you mean in a looser sense.

MARTIN: I 100% agree with your much more nuanced and refined definition. And that's exactly right. I just meant it in a loose way.


MARTIN: You take data and you compute a number out of the data, and that's it. It just seems that a lot of people who are a little less close to the field have this idea that you just tell the computer, "Hey, take all the data in the world and do some thinking with it like a human," and then the computer behaves like a human or something. Whereas on the ground, it's really just much more similar to traditional statistics, where a human being decides which data to use, which model (which statistical or machine learning model, in this case) to apply to the data, which parameters to choose, and then what to do with the output. So that work process is much closer to how predictions in traditional statistics work, or simply predictions with machine learning, and has much less to do with robots going wild, basically.

SPENCER: So I guess you're trying to point at the role of humans in this process of deciding what algorithms to use and in deciding sort of exactly the parameters of these systems and so on. I took you saying, "Well, it's just statistics," as basically saying, "Well, it's just a bunch of numbers being crunched." And my problem with that is that I'm not confident our brains are not just statistics, by that definition.

MARTIN: Huh...I don't take issue with that either. [chuckles] I agree with that as well. But our brains seem to be much better at crunching a particular type of numbers than computers are, right? That's why we do CAPTCHAs. There are some problems that are extremely simple for humans and, at the same time, extremely difficult for computers. And obviously, the space where you want to be is doing the exercises that humans are good at and that complement what the computers do, as opposed to putting yourself in a field where computers are soon going to be better and cheaper and faster than humans. And to know which tasks and which occupations belong in which of these categories, one has to understand a little better the engineering of what the machines actually do and what they don't do. And for that, going through some of these number-crunching exercises is actually quite useful.

SPENCER: I think one of the tricky bits here is that what machine learning algorithms are capable of has actually changed a lot, even over the last five years. Like, I don't think that five or six years ago we could get a computer to, for example, generate a photorealistic face of someone who doesn't exist, and now we can. Or, they maybe could generate simple music 10 years ago, but now AIs can actually generate pretty sophisticated music, and so on and so forth. So, it's a bit of a moving target. We keep getting these "statistical systems" to do more and more miraculous stuff that, increasingly, is in the realm of stuff we used to think only humans could do. And every year or two another thing falls, and we're like, "Oh, only humans can do X." And then, "Oh, wait, now actually the best in the world isn't a human anymore."

MARTIN: Yeah, I agree with that characterization as well. That boundary is moving, and predicting in which domains that boundary is going to move next, and then creatively coming up with tasks and jobs and business models that complement this development: that is a fundamentally human endeavor. So I say to my students: the prediction exercises that computers are really bad at are those where the past does not predict the future. When a particular relationship is not found in the past data, it's very hard for a machine learning algorithm to make good predictions. And this is where humans are comparatively good. So when there's not a lot of data and there are no generic relationships, that is when humans come in, and the AI is just not as good. So let's work through an example here. Fun fact: for the last five years, I've told my students, "Suppose you run a quantitative hedge fund. The quantitative hedge fund digs through past returns and all kinds of data from the past and makes automated trading decisions based on that data. And now you see that there might be a military conflict between, say, the US and China, or the US and Russia. Now you decide whether to let your computer run, or whether to make the algorithm stop, because you figure there's no way the computer can figure out what will happen in this event, because it's never happened in the past. And therefore, you cannot expect a computer to make reasonable or good decisions in this completely non-generic scenario." Now, that deliberation of whether or not to shut off the computer, that's a deliberation for a human, a human who knows really well what the computer knows or can know, and how it would behave. But it's human judgment, judgment being the key word here, that machines are very far from replacing. And there are lots of analogies to that.
Whether or not to let the Tesla drive on Autopilot, that's a human decision. So this judgment is a skill that actually increases in value as the simple and generic prediction tasks become cheaper and better to perform with machines. The judgment related to operating the machines becomes more valuable, and it's a totally human task.

SPENCER: It's an interesting question whether that will eventually be replaced by machines, doing that high-level task of deciding what algorithms to run. I think your point is well taken that if you don't have training data, you can't train a machine learning algorithm. And furthermore, if you do have training data, but that training data does not reflect the relationships that you need to understand in order to do something, well, then you also can't use machine learning right now. Right? There has to be data there to train it on.

MARTIN: You can still use machine learning, just the output is going to suck. [chuckles] So another example I use often is: it is December 2020 and you're using COVID coronavirus incidence rates in order to predict hospital admissions. All right, that's fine. You do some machine learning exercise, you train a model, you get a prediction out of that. Now, a bit more than a year later, in say February 2022, if you take COVID incidence rates and you try to predict hospital admissions with them, well, you can run the same machine learning algorithm you trained on this past data, but you wouldn't expect it to perform particularly well, simply because the nature of the problem has changed. In the meantime, the population has been vaccinated, it's a different strain of the virus, and so forth. The point is, nobody tells the machine that the rules of the game have changed, and the machine doesn't know that by itself. But a human being who knows how the machine operates and what the machine learning algorithm actually does would know already, before the fact, that you should not trust the machine on this problem unless you've retrained it, simply because you know that the data-generating process has changed. And this kind of critical thinking about what the machines are doing, this is the valuable commodity also within the tech firms these days, where it's about interpreting the output of the machines. It is about designing experiments that allow you to make causal inferences. So you work at (I don't know) a tech firm, and you place online ads, and you want to find out whether making the ads yellow or red makes it more likely that people will click on them. How are you going to find out? You have to start designing experiments, and that's a skill that computers are extremely far away from having, and that's the skill where human thinking comes in: human creativity, economic theory, experience and intuition and stuff like that.
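A toy sketch of the point being made here, with entirely invented numbers (this is an illustration, not anything computed in the episode): a model fit when roughly 10% of cases led to hospitalization keeps being applied after vaccination and a milder variant have changed that relationship, and nobody tells the machine.

```python
# Illustration: a model trained on an old incidence -> admissions
# relationship fails silently once the data-generating process changes.
# All numbers are made up.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: admissions ~ slope * incidence."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# December 2020: suppose ~10% of recorded cases end up in hospital.
incidence_2020 = [100, 200, 300, 400]
admissions_2020 = [10, 20, 30, 40]
model = fit_slope(incidence_2020, admissions_2020)  # slope = 0.10

# February 2022: vaccinated population, milder strain -- now suppose ~2%.
incidence_2022 = [100, 200, 300, 400]
admissions_2022 = [2, 4, 6, 8]

# The old model is reused unchanged, exactly as in the example above.
predicted = [model * x for x in incidence_2022]
errors = [abs(p - a) for p, a in zip(predicted, admissions_2022)]
print(predicted)      # [10.0, 20.0, 30.0, 40.0] -- five times too high
print(sum(errors))    # 80.0 -- the machine never learns the rules changed
```

The human contribution is precisely the step the code cannot take: knowing, before running it, that the 2020 slope should not be trusted on 2022 data.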
So the emphasis of the book is on helping people identify these areas where humans have a huge advantage over computers so far, and are just extremely unlikely to be replaced anytime soon. That is the point of the book, "The Business of Big Data," which I wrote with Uri Bram, who is a hilarious writer and much better than me: we're trying to direct people's attention to the areas where humans have a huge advantage over computers and where humans complement computers' ability to make fast and cheap and high-quality predictions. And those are the areas that will gain in value in the future, even as computer predictions become cheaper and cheaper, and therefore more ubiquitous as well.

SPENCER: Now, I know that you think the focus in the media on, let's say, robots or AI taking over is a big distraction, and that there are actually real issues that need attention from regulators right now. So I'm wondering, what do you feel it's really distracting from, and what would you like to see regulators focus on?

MARTIN: So there are several areas. One is discrimination. AI holds huge promise in getting rid of human discrimination. But it takes a huge amount of human thinking and training and scrutiny to make sure that the AI algorithm doesn't actually perpetuate discrimination that humans have created in the past, or even exacerbate it and make it worse. So discrimination is one big issue. And that's true in policing, in lending decisions, in pricing, in decisions about who gets a loan or not, who gets to buy a house or not, who gets to see an ad for a loan that will enable them to buy a house, and so forth. So that's going to be pervasive. And I think it's a huge issue. Another area is that the tools of antitrust law and competition law are not in all cases well equipped to deal with the new challenges that the big data economy poses. For one reason, because for the longest time, people have not understood data to be a key resource, a key commodity as an input to the production process, and have not quite understood how the industrial organization changes, and what the economics of big data are. So antitrust is another issue that I think politicians and policymakers need to pay much more attention to than they presently do.

SPENCER: The AI bias topic, where you can have algorithms making really important decisions, but in a kind of inscrutable way that might actually be reflecting societal biases, I think that's a really big topic that's surprisingly hard to think of solutions for. So you can have AI algorithms that might produce biased outcomes. But how do you regulate that in a way that protects people?

MARTIN: It's a huge challenge, I agree with that. There's also big hope (in case I don't get to that, remind me of it). I think the first step in thinking about how to regulate it is to figure out to what extent it already is regulated. We have anti-discrimination laws that apply to, say, lending decisions and mortgages, employment decisions, and so forth. The question is how these existing laws apply when a machine does the discrimination, and who has the burden of proof of showing that discrimination has happened. So that's the first step: to figure out how the existing law applies. And obviously, a lot of these law cases have simply not been decided yet. So there's a huge amount of uncertainty about where the law stands on many of these issues, and the law is being rewritten on a daily basis, one decision at a time.

SPENCER: So let me give an example that illustrates why I think this is such a tricky area. So suppose that it's illegal to take into account whether someone has, let's say, been convicted of a crime when you're hiring them. So suppose in this jurisdiction, you're literally not allowed to use that information. Now, that means that, clearly, when you're designing an AI, you can't put a variable in there, like, "are they convicted of a crime?" But in theory, that algorithm with whatever information you do give it — let's say you give it hundreds of pieces of information about the person — many of those might be correlated with whether someone was convicted of a crime. And so what you might find then is that the algorithm might be actually hiring people who've been convicted of a crime much less often than others. And now you're tasked with the question, "Well, is it discriminating or not?" Technically, it's not using that information. But it's sort of maybe inferring it through kind of second degree information, but in a way where it's very subtle, it's like hard to even say if it is or if it isn't.
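Spencer's point can be made concrete with a tiny synthetic sketch (nothing here comes from the episode; the feature, the data, and the one-rule "model" are all invented for illustration): the screening rule never sees the conviction field, yet a correlated feature reproduces the disparity.

```python
# Proxy discrimination illustration: the model's only input is a gap
# in employment history, which (in this synthetic data) correlates
# strongly with having a conviction. The conviction field itself is
# never used by the model.

applicants = [
    # (years_of_employment_gap, convicted)
    (0, False), (1, False), (0, False), (1, False), (0, False),
    (3, True),  (4, True),  (3, True),
]

def model_hires(gap):
    """Toy rule learned from past (possibly biased) hiring data."""
    return gap < 2  # hire only if the employment gap is under two years

hired_convicted = [model_hires(gap) for gap, convicted in applicants if convicted]
hired_clean = [model_hires(gap) for gap, convicted in applicants if not convicted]

rate_convicted = sum(hired_convicted) / len(hired_convicted)
rate_clean = sum(hired_clean) / len(hired_clean)
print(rate_convicted, rate_clean)  # 0.0 1.0 -- disparate outcomes, no conviction input
```

Exactly as in the example above: technically the prohibited variable is never used, but the hiring rates by conviction status still diverge completely, and whether that counts as discrimination is the hard question.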

MARTIN: Yeah, that's exactly right. So the former is called disparate treatment: you treat people differently. The other one is called disparate impact, and it works exactly the way you describe it; just because you didn't intend to discriminate, or because you didn't do it explicitly, that doesn't mean it's legal. Again, the key issue is who has the burden of proof, whether it is, in this case, the company that may have discriminated against a potential employee, or whether it's the employee that has to prove to the company that discrimination happened. And of course, having access to the data that is being used and to the algorithms that are being programmed to analyze the data is a huge advantage in being able to prove the case one way or the other. And obviously, the companies are at a certain advantage here over potential plaintiffs outside the company to be able to do that. So that's why it matters a lot who has the burden of proof of showing that. And there are also procedures inside companies now to try out various different algorithms to see whether, for example, one of them serves the same business purpose of, say, getting rid of asymmetric information about the probability that a loan gets repaid (that is, discriminating across the dimensions relevant for the business), but at the same time is not unnecessarily discriminatory along these protected characteristics. I spend a lot of time with my students at Oxford going through these different methods and arguments to try to find how to prove or disprove that there has been such unintentional discrimination as you just described.

SPENCER: I think if you define discrimination as unequal outcomes, if you said, "Okay, the law declares that, of all the people who apply for this job, those who have been convicted of a crime and those who haven't have to be accepted at an equal rate," then it's sort of a straightforward thing to measure. You can just say, "Okay, of those who applied, how many were accepted in the group that has been convicted versus the group that hasn't?" But I think for other definitions, it becomes extremely hard to measure, because it's just not even clear what it means to discriminate. Let's say people who have been convicted are hired at a much lower rate. One could say, "Well, the algorithm didn't have whether they were convicted; the algorithm is just looking at hundreds of variables and deciding this person is a less good candidate, with maybe hundreds of factors each contributing a small amount to that decision." What does discrimination even really mean in that context?

MARTIN: That is part of the challenge. So, there are some cases where it's clear. Say, suppose it was true that a credit score was the only relevant predictor of loan default. (Let's just go into this theoretical world where that is true.) But then you find that, for a given credit score, people of a particular race are much more likely to get a loan than people who are not of that race, or they get a loan at lower interest rates. Then you would say that there's probably discrimination happening, because you can just see statistically that there's a systematic bias in one direction or the other. But yeah, there's a myriad of cases where it's not clear at all that such a showing can be made. And it's really perhaps more the exception than the rule that such a clear definition is possible and one can measure it. As I say, it's a big measurement challenge. Now, the hopeful point that I wanted to make previously (and am reminding myself to make) is: at least the data exists. So in the past, if you felt that the loan officer of the bank around the corner was discriminating against you based on your race, what were you going to do about it? There was basically no point, no hope, in proving that there was a systematic bias in the decisions this loan officer made, that for people of a given credit score, he gave people of one race lower interest rates than people of the other race, or simply declined their loans at a much higher rate in the first place. But nowadays, at least there's hope that, if there's enough willingness on behalf of the courts, of lawmakers, and of the actors involved, one can test, using the data that is now available, whether discrimination has happened. So I'm quite hopeful that the ubiquity of data actually will lead to a lot more scrutiny along these dimensions, and that lenders will pay much more attention to potential discrimination in their decisions.
Employers may pay more attention to it, simply because the data exists to actually examine whether unintentional discrimination is happening. In the past, the data was simply not there to prove these points. So that challenge is actually becoming less severe over time, as more data becomes available. That's a bit of a hopeful note.
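A minimal sketch of the check Martin describes becoming possible once the data exists: comparing approval rates across groups within the same credit-score band. The score band, group labels, and decisions below are all invented for illustration.

```python
# With loan-level data recorded, one can at least look for systematic
# bias at a fixed credit score: same band, different approval rates.
# Synthetic data; "A" and "B" are placeholder group labels.
from collections import defaultdict

# (credit_score_band, group, approved)
decisions = [
    ("700-750", "A", True),  ("700-750", "A", True),
    ("700-750", "A", True),  ("700-750", "A", False),
    ("700-750", "B", True),  ("700-750", "B", False),
    ("700-750", "B", False), ("700-750", "B", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for band, group, approved in decisions:
    counts[group][0] += int(approved)
    counts[group][1] += 1

rates = {group: approved / total for group, (approved, total) in counts.items()}
print(rates)  # {'A': 0.75, 'B': 0.25} -- same score band, very different rates
```

In practice one would also need a statistical test and far more data before calling this evidence of bias, but the point stands: the loan officer of the past left no such record to examine.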

SPENCER: Martin, thanks so much for coming on. This was a fun conversation.

MARTIN: Yeah, it was a lot of fun for me. Thanks very much for having me again.


JOSH: A listener asks: What's your take on the information ecosystem at the moment? And how do you ensure that you're getting information from good sources that are trustworthy?

SPENCER: My take is that on politicized topics, it's very hard to trust any one news source, unfortunately. News organizations have largely become political. On topics that are not politicized, I think they tend to be quite a bit more accurate. News organizations often make mistakes, but at least they are much less biased and are trying to get reasonable, accurate takes. On political topics, if you really want to understand them, it really is helpful to read news that comes from multiple sides. For example, there's a newsletter I subscribe to called The Flip Side. Every day, they take one news story and present information on it from a left perspective and a right perspective. And they sometimes also include a libertarian perspective. I also really like it because it's just one or two minutes a day, so it's really fast. So I appreciate that. I also honestly get a lot of news just from people I follow on Facebook or Twitter. So I try to think about who the people are that do a good job of synthesizing information, who are skeptical and thoughtful, who are going to have a compelling, well-thought-out take on what's happening that's not just going to follow the same biases as everyone else's.



