Clearer Thinking with Spencer Greenberg: the podcast about ideas that matter

Episode 135: Anti-interoperability, vendor lock-in, and high switching costs (with Cory Doctorow)


December 8, 2022

What is interoperability? What counts as "unauthorized" access to computers or parts of computers? If the rendered design of a web page is copyrighted, then does blocking ads on that page count as copyright infringement by creating a derivative product? Does Facebook really want what's best for its users? Is Google evil? Could blockchain-based solutions provide much-needed privacy or interoperability? Why doesn't the U.S. government (for example) fight harder to prevent vendor lock-in when buying goods and services? Which tech companies, if any, should be broken up?

Cory Doctorow is a science fiction author, activist, and journalist. He is the author of many books, most recently Radicalized and Walkaway, science fiction for adults; Chokepoint Capitalism, nonfiction about monopoly and creative labor markets; In Real Life, a graphic novel; and the picture book Poesy the Monster Slayer. His latest novel is Attack Surface, a standalone adult sequel to Little Brother. In 2020, he was inducted into the Canadian Science Fiction and Fantasy Hall of Fame.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Cory Doctorow about big-tech, interoperability, and the legal landscape around software modification.

SPENCER: Cory, welcome.

CORY: Thank you very much. It's nice to be here.

SPENCER: Today, we're gonna talk about a topic that's of ever-increasing importance, which is the role of large tech companies, and technology more broadly, in our lives. And I think you have a lot of interesting opinions on this and about what we can do to help prevent catastrophe. I've talked on this podcast before about other types of tech catastrophes, like the changing landscape of AI. But today, we're gonna talk more about the way technology is affecting more and more aspects of our lives, and especially the role of large tech companies in that.

CORY: I think that there is a very urgent question about how we're going to relate to the technology in our lives, and who's going to get the decision about how it works and what it does and who it does it for and who it does it to. And for the record, I don't think we have an AI crisis except for the crisis of people credulously thinking that if we do enough statistical inference, eventually it will be intelligent, which is about as plausible as if we do enough horse breeding eventually it will turn into an internal combustion engine.

SPENCER: Well, maybe we can dig into that at the end. I'm not sure you and I agree on that, but that's definitely a good topic. So to start off, what is interoperability, and why does it matter?

CORY: Interoperability is something that is very common. You can put anyone's shoelaces in your shoes, and you can wear any socks with them. You don't have to use Nike shoelaces in your Nike shoes; anyone's tires go on your car, you can pour your orange juice into anyone's glass, and so on. And with computers, interoperability is a lot more salient and a lot more universal, because computers are universal machines. The computer is a device that can run all programs that we can express symbolically. — We don't know how to make almost-universal computers; it would be great if we did. It would be super awesome if we could figure out how to build a computer that was only a printer and couldn't also run malware. But the reality is that your printer is just a computer in a fancy case connected to some ink, and it can run all the same programs as your desktop computer, which can run all the same programs as a singing greeting card, which can run all the same programs as the embedded processor in your little webcam, albeit some of them will run those programs much more slowly. — And what that means is that whatever product or service you're using now can be plugged into a new product or service that you might want to make. And sometimes we do that deliberately, so we'll have things like standards. The reason that your browser can connect to any web server, even though the company that made the browser is maybe not the company that made the server, is that there are standard ways for the two of them to exchange data. And sometimes it's inadvertent: you can go down to the gas station, and they'll have a fishbowl full of 50-cent USB adapters for your car cigarette lighter. The people who made that cigarette receptacle didn't anticipate that you would use a USB charger with it; they don't care if you're using it, and they're not going to try and stop you. They're also not going to try and help you.
And then there's, I think, the most interesting kind of interoperability, which is adversarial interoperability, or, as we call it at the Electronic Frontier Foundation, competitive compatibility (because it's too hard to say adversarial interoperability). And that's where you plug something into something that already exists, even though they don't want you to. This is a very old, significant, and honorable tradition: PC-compatible computers that could run PC software without IBM's permission, and cable television, which started by sucking down broadcast signals without permission from broadcasters and sending them out over what they called community antenna television, and more and more. Or Apple's version of Microsoft Office, the iWork suite, which can read and write Microsoft Office files, and which was made against the wishes of Microsoft by reverse engineering its file formats. All of that stuff is super cool, and it's what keeps companies on their toes. And it's also what lets you, the person who is using a piece of technology, decide how it works. It lets you seize the means of computation, so that if the people who designed it weren't mind readers who could also peer into the future and know exactly how you needed your technology to work, it doesn't mean that you have to compromise. You can just change it so that it works the way you want it to.
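The standards-based kind of interoperability Cory describes (your browser talking to anyone's server) can be made concrete with a toy sketch. This is not any real browser or server code; it just shows how two independent implementations can interoperate because they both follow a published grammar, here the HTTP/1.1 request line ("METHOD SP target SP version", per RFC 9112).

```javascript
// "Vendor A": a client that serializes an HTTP/1.1 request line.
function buildRequestLine(method, target) {
  return `${method} ${target} HTTP/1.1\r\n`;
}

// "Vendor B": a server, written independently, that parses a request
// line it has never seen before. It can do so only because both sides
// follow the same published standard.
function parseRequestLine(line) {
  const match = /^([A-Z]+) (\S+) HTTP\/1\.1\r\n$/.exec(line);
  if (!match) throw new Error("malformed request line");
  return { method: match[1], target: match[2] };
}

// The round trip works even though neither "vendor" knows the other exists:
const parsed = parseRequestLine(buildRequestLine("GET", "/index.html"));
console.log(parsed.method, parsed.target); // GET /index.html
```

The same logic explains the shoelace and tire examples: the "standard" is just a shared interface that neither party controls.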

SPENCER: So what are some instances where this is coming to a head right now?

CORY: Oh, there's a ton. This week we heard the Facebook whistleblower speak, and she said that Facebook has all of these problems, but one of the biggest ones is that whenever they discover that something that's good for their users is bad for their shareholders, the users lose. And they keep making all of these decisions, in terms of the design of the system, the way that it suggests things to you, and the algorithm, that make it bad for you but good for them. And Mark Zuckerberg denounced this. He sent out an internal memo that says, “That's just ridiculous. Everything we do, we do to make users happier, because advertisers want to advertise in a place where people are happy.” And so, you can empirically evaluate whether this is true by looking at how Facebook treats third parties who alter its service. One of those is a guy named Barclay, a British software developer, who made a thing called ‘Unfollow Everything'. It's a little browser plugin that autopilots your Facebook and steps through every single thing you follow on Facebook — all of your friends, all the pages, all of the groups — and then unfollows all of them. Now you still remain friends with those people, but you no longer have a newsfeed, because the newsfeed is built out of the things you follow. And you can either selectively go and look at people — so I can just say, “I wonder what Spencer's up to today,” and go look at your wall and see what you posted — or you can selectively re-follow some people. He did this manually the first time. And he was like, “This is the best Facebook experience I've ever had, because I just go look at the people and stuff that I'm interested in, and there's no infinite scroll. And when I get to the bottom, I'm done with Facebook. I log off, and I come back the next day. This is the way that I like using Facebook; maybe other people will too.” So he made a tool—Unfollow Everything. And his users loved it.
They said, “This is the way that I want to use Facebook.” And there were some academics in Switzerland who wanted to do a study on it, where they would have a control arm that would just use Facebook regularly, and an experimental arm that would use his plugin. And they would compare user satisfaction and time spent on the platform. And he specifically liked this better than regular Facebook because he spent less time there, and the time that he spent was better spent. Now, if you believe Mark Zuckerberg, then this should be something Facebook likes a lot, right? Because they just want to design a system that makes people happy, not a system that turns them into clicking zombies that they get to show as many ads to as possible. And that's not how Facebook treated this. They deleted this guy's account. They sent him a cease and desist order. And they threatened to sue him into a radioactive crater if he ever developed anything for Facebook ever again for the rest of time. Now, that kind of interoperability shows you that Facebook is probably not a good custodian of how Facebook should work. Even if we take them at their word — maybe Mark Zuckerberg is right, and maybe Facebook only designs services in ways that they think will delight their users — then the reason that their service didn't work the way the Unfollow Everything service works is that they're really bad at it. So if they're really bad at it, you might be better at it. And you might fix their dumb mistakes. Or maybe they're actually disingenuous, and their service works in a way that makes people unhappy, and there are ways that would make people happier that they neglect and actively take countermeasures against, because they're mean. So either they're stupid or malicious. But one way or the other, it would be great if you could alter it. And as a technical matter, as we see, it's actually pretty easy to alter it.
And this is the kind of thing that would be very hard for Facebook to stop, because all it's doing is autopiloting the browser. So from Facebook's perspective, there's no difference between you logging in and stepping through every single one of these things that you follow, clicking the ‘unfollow' and the ‘are you sure' dialogs and so on, and a script that just drives your browser to do it. It would be very hard for them to deploy a countermeasure against it. So they used the law. They used the bowel-loosening threat of going up against a trillion-dollar company in court over ‘Terms of Service' violations to intimidate people out of doing it, which is in fact what happened. He's abandoned the project.
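The autopiloting described here is genuinely simple. A hypothetical sketch of the core loop follows; this is not the actual Unfollow Everything code, and in a real browser you would pass in `document.querySelectorAll(...)` results using selectors matched to Facebook's markup (which are assumptions, and change constantly). To keep the logic runnable outside a browser, the function just takes any list of objects with a `click()` method.

```javascript
// Sketch: click every "Unfollow" control, then confirm the
// "are you sure?" dialog each time. Returns how many follows
// were removed.
function unfollowAll(unfollowButtons, findConfirmButton) {
  let count = 0;
  for (const button of unfollowButtons) {
    button.click();                      // open the confirmation dialog
    const confirm = findConfirmButton(); // locate the dialog's confirm control
    if (confirm) {
      confirm.click();
      count += 1;
    }
  }
  return count;
}

// Stand-in for a browser session: three fake "Unfollow" buttons and a
// confirm dialog that record every click, so the loop's behavior is visible.
const clicks = [];
const fakeButtons = ["friend", "page", "group"].map((kind) => ({
  click: () => clicks.push(`unfollow:${kind}`),
}));
const fakeConfirm = { click: () => clicks.push("confirm") };

const removed = unfollowAll(fakeButtons, () => fakeConfirm);
console.log(removed); // 3
```

As Cory notes, nothing here is distinguishable, from the server's side, from a patient human clicking the same buttons, which is why Facebook reached for legal threats rather than technical countermeasures.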

SPENCER: Yeah, I've actually modified Facebook before by just writing a little bit of JavaScript to do what I want on top of the page. It's extremely easy to do. I'm confused, though, what is actually illegal about what he did?

CORY: That's a good question. He's in the United Kingdom, and it's not clear whether they're referencing US law or British law. Facebook has successfully sued, and even more successfully threatened, many, many people for altering the way its service works, for violating its Terms of Service. And their Terms of Service basically say you can't make any modifications to Facebook. And they've used a law called the Computer Fraud and Abuse Act, or CFAA, which is probably the oldest computer law we have in America. It was passed in 1986, literally after Ronald Reagan saw Matthew Broderick in the movie WarGames and had a panic, and it's a very expansive cybersecurity law that defined exceeding your authorization on a computer as an offense. Facebook has interpreted that law as meaning that your authorization is whatever the Terms of Service let you do. So if the Terms of Service say that you can do A, B, and C, but not D, then if you do D, you commit a potential felony. And many federal prosecutors embraced this theory. They brought 13 charges against a friend of mine named Aaron Swartz, who was one of the founders of Reddit, for violating Terms of Service to automatically download academic articles that he was allowed to download, but that he was supposed to click on individually instead of downloading with a script. He was facing 35 years in prison when he hanged himself in 2013. That law was substantially narrowed when the Supreme Court ruled in a case called Van Buren. They said that that's not really how it should be interpreted: what was meant by exceeding your authorization is that if there's a computer you're not allowed to access and you access it, that exceeds your authorization. But if you're on a computer where you're allowed to click three of the buttons but not the fourth one, clicking the fourth one doesn't make you a felon.
They may be going after more exotic legal theories here than the Computer Fraud and Abuse Act, because, as I say, that's been substantially narrowed. A favorite one is something called tortious interference with contract, which means that, like, I have a contract with Facebook to use it in a certain way, and when you make a tool that lets me use it in a different way, you interfere in my contract with Facebook, and Facebook says that it has standing to come after you. British law tends to be more deferential towards corporations, so they may be thinking of a tortious interference theory under British law, or a cybersecurity claim under British law as well. We've even seen, thankfully, mostly unsuccessful claims under copyright law, where you have firms that say the intended rendering of this webpage is a copyrighted work, and when you alter the rendering of the webpage, you make a derivative work without permission. It's a very silly theory, because it would mean that ad blocking is a copyright infringement. But so is increasing the font size, or switching to night mode, or not loading the images in order to save bandwidth, or, in theory, making your window really wide so it breaks the word wrap. But companies often try it. And in the United Kingdom, if Facebook prevails, they will be entitled to recoup their legal fees. And so what this guy said was, “Look, they can grind me into the ground on this case, dragging it out for as long as they need to. And then, if I am unfortunate enough to lose, I have to pay their $1,000-an-hour lawyers for all the time that they spent dragging out the case. So I'm just not even going to risk it.”

SPENCER: Yeah, it seems such asymmetric warfare. Even if you're gonna win the lawsuit, it's gonna be so costly that you don't want to go through that.

CORY: Sure, yeah.

SPENCER: So what would you say to someone who makes the argument that these companies create software — you don't have to use that software — but because they made it, they have the right to dictate how it's used. And if you're not cool with that, just don't use it.

CORY: Yeah. There's lots of ways to address that. [laughs] I like the one that says, “You offered me the software, I used it. If you didn't want people to modify it, no one told you to make it.” That argument cuts both ways. People have been changing how the software runs on their own computer for as long as there has been software. That was the rules of the game when you started. You knew that those were the rules. Why did you make software if you didn't want people to customize it so it serves them better? But, there's another argument here, which is that as a technical matter, I don't expect you necessarily to cooperate with me when I do this adversarial interoperability. You might try and alter your design to stop me from doing it. I mean, you could imagine that they might try to prevent people from automating unfollowing from each item by having the ‘unfollow, are you sure' dialog appear in a different place on the screen every time, so there's a randomizer so it'd be harder to script or something. You can get into guerrilla warfare with me. But why is it the court's business to come in and put its thumb on the scales when you and I are disagreeing with one another?

SPENCER: Do you think it's unethical for companies to use this kind of guerrilla warfare tactics to try to reduce the ability of their users to make modifications?

CORY: I think it depends on the modification. And I think you're getting at something really important here, which is that modifying the user experience could, in fact, do things that harm you as the person who's making the modification, or harm the service in ways that go beyond simply damaging the shareholders' interests by making things better for users. (If you think that there is an equilibrium between the shareholders' interests and the users' interests, you're just shifting that equilibrium a little bit towards the users.) It could also harm third parties. You might make a tool that doesn't just export your own data but exports your friends' data and exposes them to privacy risks. And the thing that I think we can take away from the Facebook experience with Unfollow Everything also applies to some of the other services they've attacked. A good example would be Ad Observatory, which is an NYU engineering school project that recruits Facebook volunteers to run a plugin that grabs any political ads they see and puts them in a public repository, so that independent researchers can validate whether Facebook is enforcing its own rules about political advertising and disinformation, which, it will not surprise you to learn, they are not doing. Either the ads are worth more money to them than they're willing to give up, so they don't enforce the rules very vigorously, or the ads are worth so little money that they don't want to spend a bunch of money enforcing their rules. One way or the other, they're not doing what they tell people they're doing. Facebook has threatened to sue these guys under the Computer Fraud and Abuse Act and under tortious interference theories. And they make all kinds of arguments about why they need to do it. They said, “Well, maybe you guys are grabbing user data.” But the plugin is free, open-source software, and Mozilla Labs did a full teardown of it, and they're not. Just as a factual matter, it's not happening.
You can also use a little protocol analyzer and see whether or not they're doing it. It's just wrong, like, you are entitled to your own opinions, but not entitled to your own facts. But Facebook went after them, and they said, “We have to do this to defend user privacy. After all, nobody wants another Cambridge Analytica.” And nobody wants another Cambridge Analytica. That part is absolutely true. The question is whether Facebook should be the one who gets to decide what we do to prevent another Cambridge Analytica. And I would argue that they shouldn't be, for a couple of reasons. The first obvious one is that they're very conflicted. Sometimes Facebook defends its users' privacy, and they really do defend their users' privacy. A lot of the time, they stop identity thieves and other people from just harvesting user data willy-nilly. And they do that because they have the same interest as users at times. And then there are other times when Facebook's interests are against the users' interests. When they tilt towards the shareholders, and the shareholders and the users have antithetical interests. And in those cases, the users lose. So Facebook has this unresolvable conflict of interest that disqualifies it from being the arbiter of when it's okay to mod something and when it's not. And then there's another, I guess, more practical reason that we shouldn't let Facebook be the arbiter of what things lead to Cambridge Analytica situations, which is that Facebook disqualified itself from that when they let the original Cambridge Analytica situation happen. If Facebook's argument is, “We're going to prevent the next Cambridge Analytica,” I think we have the right to ask, “Well, why should we trust you when you permitted the first one?” And so this raises this question, if we're going to not allow Facebook to decide what mods are and aren't legitimate, then who is going to make that call? Someone needs to make that call. 
We don't want someone coming along and modifying Facebook in ways that expose millions of users to identity theft, or exfiltrate their private messages, or do other things that are disfavored and dangerous and bad. And I think the right answer is that, rather than having those choices made by corporate fiat, they should be made by democratically accountable lawmakers. If we're worried about privacy, rather than inviting Facebook to abuse cybersecurity and copyright law to defend its users' privacy, we should have a privacy law. And that's the thing that Facebook has actually gone to enormous lengths to prevent. But if we had a free-standing federal privacy law with a private right of action, so that you or I could sue anyone who violated our privacy rights, something that America is really sorely lacking, then not only could we tell whether an interoperator plugging something new into Facebook was doing something bad for privacy — because we could tell they'd be violating privacy law — but we could also tell whether Facebook was doing something bad for privacy. The advantage of that law is that it would bind all parties. It would create a level playing field where Facebook and its rivals would all have to treat it as the floor below which they could not slide, and that would determine what good privacy and bad privacy were. I think that's a much better answer than trying to make Mark Zuckerberg a better pope-emperor for three billion people. Fixing the internet is much more important than fixing the platforms. Fixing the internet means taking power away from giant companies, while fixing the platforms means perfecting them, and I don't think we can perfect them. And in fact, it's not really my job to perfect them. It's their company. They can do what they want to figure out how to perfect it or not. But it's our internet. And so we should have the right to take such steps as will improve our internet.


SPENCER: How do you model the behavior of companies? Because I like to think of there being two types of companies: ones that are founder-led, where the founder has a strong personality and is still exerting a lot of control, so the company does things that are aligned with the founder's vision; and other types of large companies that you can basically think of as just profit-maximizing agents. They're generally imperfect (they're not perfectly rational), but they're attempting to maximize profit in a very sociopathic way, because that's all they care about, and somebody that only cares about one thing generally is quite sociopathic. I'm just wondering, how does that relate to how you think about companies?

CORY: That's a very good question. I think that you're right that, just as a formal matter, there are a handful of very large companies whose stock structure is such that, even though they're publicly listed and have a diversity of investors, they act according to the whims of a founder who has the controlling vote. News Corp is one of them, Google sort of is one of them, and Facebook is one of them, because of its dual share structure. And they do behave a little more idiosyncratically, you might even say less professionally, in the sense that they behave according to the whims of an unaccountable leader instead of being disciplined by the fear that the board will be ousted by the shareholders, who will put in a new board that will put in a new CEO that will make them do something.

SPENCER: Right. It's clear Google is not trying to maximize profit. I think that's pretty obvious.

CORY: Well, I don't know about that. Why would you say that?

SPENCER: Oh, well, they sit on just absolutely massive amounts of cash. And they do huge numbers of projects that seem — well, it seemed to me, anyway — they're not profit maximizing.

CORY: I would completely dispute that characterization of how Google works. So Google, as a share of its revenues, spends almost nothing on R&D. I don't know if you ever followed Intellectual Ventures, the Nathan Myhrvold patent troll outfit?

SPENCER: Yeah, they would make just thousands of patents, right?

CORY: Well, they would just buy patents from failed startups and then sue people. And they had a lab, and the lab made stuff. Their argument was, “We're not a lawsuit company. We're an Idea Factory. Look at all the cool things our labs made.” But their labs never made products that were commercialized, and they were a rounding error on the total revenue. What they were was a lawsuit factory. What Google is, is a company that has monopolized a large section of the ad market, and which — according to the Texas Attorney General documents that were pried loose from Google and Facebook — also price-fixes the ad market and extracts about half the advertising dollars spent in the world. And then it has this tiny little R&D spend. Mostly what it does is buy firms vertically adjacent to its own line of business, in order to consolidate its ability to extract monopoly rents from advertisers and to starve its supply chain of publishers of a share of those monopoly rents. And everything else is just window dressing. And they're incredibly good at it. I mean, advertising is more expensive, publishers see less of it, and advertisers spend more for it than they ever have. And almost all of it goes to one company. And the stuff that doesn't goes to Facebook, with whom they rig the market. Everything else is in service to that. They're super good at it.

SPENCER: I don't know, because Alphabet, as I understand it, spends about 27 billion a year in R&D. Are you saying that that's a rounding error?

CORY: Yeah. They are a trillion dollar company, and that R&D includes a bunch of stuff that is not new products. It's just refining ad tech and adjacent stuff, like server management and whatnot. So when you add up all of the capital Google owns, and you add up all of its annual expenditure on R&D, and you multiply its revenues by whatever multiplier you want to use to calculate the market cap, you end up with a giant gap between its market cap and its actual value as a company on paper. And that gap is called its intangibles. And normally we would attribute those intangibles to goodwill or name recognition or something. But even for companies that are extremely well known, and whose logos are licensed and appear in lots of places, it's much larger than that. And the thing that that gap looks like as a share of the company's net worth is the market cap gap that we see for monopolies. It's the investor consensus on how much monopoly rent Google can extract because it has no effective competitors.

SPENCER: I'm not sure I agree with that way of analyzing it. My understanding is they spent $27B on R&D, and their quarterly profit was about $18.5B. I understand that if you value the entire company, that makes $27B look like a rounding error. But relative to revenue, it doesn't seem like such a rounding error.

CORY: Basically, a trillion-dollar valuation and roughly $20B in R&D. Compare that to Lockheed or IBM or AT&T a generation or two ago, and it's a much smaller share. And the other thing to note is that their R&D doesn't produce anything. Google has made one and a half successful in-house products: a search engine and a Hotmail clone. Everything else that they've done that's successful, they bought from someone else: their whole ad tech stack, their mobile platform, most of their server management tools, and so on. And everything that they made in-house beyond that one and a half products, with the exception of Google Photos (which runs on a platform that they bought and just sort of pushed to every one of those platforms), crashed and burned. So they're a company that doesn't do R&D seriously, or is very bad at it. One or the other.

SPENCER: Yeah. To your point, it's fascinating that they've had hundreds of projects; there's some website that catalogs all these dead projects from Google. And if you look at their big successes, almost all of them were purchased. A lot of people, I think, don't even realize that; they assume Google made them.

CORY: There's only one and a half big successes that they didn't purchase; it's Gmail and search.

SPENCER: Yeah, they bought Android, right?

CORY: Yeah, they bought Android. They bought their whole ad tech stack. And they'll say, “Oh, well, we professionalized it.” That's the D in R&D, I suppose. “We professionalized that. We stabilized it.” I remember before [Evan Williams] sold Blogger to Google, it was like running on these empty servers that fell over every day, and they rewrote the code base, and so on. But that's development. That's just scaling up. By definition, every monopolist is good at scaling up. That's what makes them a monopolist.

SPENCER: So going back to interoperability, I'm wondering how much of your view on this is based on the kind of thinking about what's best for society, saying, “We're best off in terms of the total effects on society, if we are allowed to kind of modify the software that we buy.” And how much of it is based on principle, where you feel like, in principle, if we buy something then we have the right to be able to modify it?

CORY: The principle is important to me, but that's not what animates my advocacy for interoperability here. When we talk about tech giants and their growth and scale, people tend to put a lot of focus on the network effects that make them big. So, you joined Facebook because your friends were there, and then other people joined Facebook because you were there, and that virtuous cycle is what fuels Facebook's growth. But it's not what keeps it big. The thing that keeps Facebook big is the high switching costs: what you have to give up if you want to quit Facebook and go somewhere else. Because the collective action cost, the cost of getting all of your friends to quit Facebook with you and go somewhere else, is transcendentally high. That means that if you leave Facebook, you will probably leave your friends behind, too. And there's nothing intrinsic to that. There's no reason that the switching costs couldn't be very low. When Facebook started, for example, the way that it defeated the switching cost of leaving MySpace was that it made a little bot that would go and scrape your waiting MySpace messages and put them in your Facebook inbox. And then when you replied to them, it would push the replies back out to your MySpace outbox.

SPENCER: Sounds like a Terms of Service violation.

CORY: Yeah, sure. But you know, every pirate wants to be an admiral. When I do it, it's progress; when you do it, it's theft. And same with Apple, right? Apple exists because they were nearly driven to extinction by the lack of compatible versions of Office. Microsoft wouldn't update Office for the Mac; it was completely cursed. You couldn't open current Office for Windows files, and if you saved a file, no one who used Windows could open it. Steve Jobs just paid some engineers to reverse engineer Word, Excel, and PowerPoint, and made Keynote, Pages, and Numbers. And then Microsoft, having lost the advantage that they gained through high switching costs, and still incurring the expense of maintaining those high switching costs (because they had all these incompatible versions of Word of their own that they had to try to manage), standardized the Office file formats. So we got docx and xlsx and pptx, which are now defined standards, and you can open them with a whole bunch of different Office programs, from LibreOffice to Google Docs. So, if we can reduce the switching costs... For example, there's a group of people we've worked with in the breast cancer community: women who carry the BRCA gene, which means that, on the one hand, they are at risk of developing breast cancer, but also, because it's hereditary, their daughters and their aunts and their mothers and grandmothers are often very sick or dying or dead as a result of it. Facebook very aggressively courted them to come onto the platform, and they did, and they grew quite large there, because it's a big platform and discoverability matters. And then they discovered that Facebook was very bad for their privacy. One of the founders of the group, who's not a computer professional, was just noodling around one day, and she realized that she could enumerate the membership of any Facebook group, even if she wasn't a member of it.
And this really alarmed her because it exposed her community to a risk. She brought it to Facebook, and Facebook said that it was not a bug report; it was a feature request, and not one they were prepared to fill. So they wouldn't fix it. She kept pushing, and eventually they said, “Fine, fine. We'll modify it a little. We will patch it so that only if you're a member of a group can you enumerate its membership.” And she was like, “That's still bad for us.” And they were like, “Yeah, that's all we're prepared to do.” And they're stuck, because the collective action problem of leaving is so high. But imagine if they could just stand up a Diaspora instance, or any other message board service, and use bots and scrapers to pull messages down from the group and push messages back to it, and add a little footer that said, “Today, 26% of the message traffic in this group originated off Facebook. Once that reaches 60%, we're going to wait 21 days, and then we're going to sever the link. Click here to find out how to leave Facebook and join us somewhere else.” They could lower the switching costs and liberate themselves from this.
And that's how we got that incredible dynamism that was once so characteristic of the internet. There was a time when, if you were a couple of people in a garage, you could buy a ROM from Phoenix, clone the IBM PC, and start Compaq or Dell or Gateway and make compatible machines. Or before that, you could make plug-compatible mainframes. The whole history of technology is people bootstrapping new firms and new services and new ideas, not by pretending that everything that came before never existed or asking people to abandon everything they used and jump to something new, but by enveloping it, by encompassing it with interoperable products.
So when the web was getting started, it had a huge problem, which is that everything on the internet that normal people used was already organized through a thing called Gopher, a menu-driven service. So they just made web browsers able to read Gopher pages and access them, and they subsumed Gopher into the web. If you were to try to do that with Facebook today, to wrap Facebook in another product, or to take Apple and do to it what it did to Microsoft by reverse engineering the iTunes file formats and making compatible iTunes players and readers and writers, they would reduce you to rubble, because they have used their monopoly rents, their excessive profits, and their power to pass new laws or create new interpretations of existing laws that allow them to prevent anyone from doing unto them what they did unto all the people who came before them.
So they're not helpless like the people who made Gopher. The way Gopher came into existence is that there were all these services you could only access over a terminal using arcane commands, and its creators literally wrote scripts that turned those into menu-driven systems without permission from the people who ran them, oftentimes to their great ire, because you'd have, say, a library database that got three queries a day from the three people who understood its abstruse syntax, and all of a sudden it's part of a global menu-driven network that anyone who can figure out menus can use. And they were like, “What are you doing? Why have you done this to my poor service?” Gopher couldn't stop other people, the web, from doing what it did to all those command-line services. But the people who run the web today can stop anyone from ever doing that to them.
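The scrape-and-repost bridge Doctorow describes, pulling messages out of a walled garden, pushing replies back, and tracking how much of the group's traffic has already moved, can be sketched in a few lines. This is a toy model only: the platforms here are in-memory stand-ins, and a real bridge would need platform-specific scrapers or bots (which is exactly the kind of interoperability incumbents currently threaten).

```python
# Toy sketch of a two-way "bridge" between a closed platform and an open one.
# All names are hypothetical; no real platform client is involved.

from dataclasses import dataclass, field

@dataclass
class Message:
    author: str
    text: str
    origin: str  # "walled_garden" or "open_server"

@dataclass
class Platform:
    name: str
    messages: list = field(default_factory=list)

    def post(self, msg: Message):
        self.messages.append(msg)

def sync(a: Platform, b: Platform):
    """Mirror messages both ways: copy anything one side has that the other lacks."""
    seen_b = {(m.author, m.text) for m in b.messages}
    for m in a.messages:
        if (m.author, m.text) not in seen_b:
            b.post(Message(m.author, m.text, m.origin))
    seen_a = {(m.author, m.text) for m in a.messages}
    for m in b.messages:
        if (m.author, m.text) not in seen_a:
            a.post(Message(m.author, m.text, m.origin))

def off_platform_share(p: Platform) -> float:
    """Fraction of the group's traffic that originated off the walled garden,
    i.e. the number a departing community would put in its footer."""
    if not p.messages:
        return 0.0
    off = sum(1 for m in p.messages if m.origin != "walled_garden")
    return off / len(p.messages)
```

Once `off_platform_share` crosses the community's chosen threshold (60% in Doctorow's example), the group severs the link; the switching cost has been paid incrementally instead of all at once.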


SPENCER: What do you think of blockchain as a solution for this kind of social media lock-in? Some people have argued that it'd be better if social media posts were written to a blockchain. Then you could have lots of different clients that could all pull them down and enforce different restrictions: some interfaces could pull in only posts shorter than 280 characters, and other interfaces could allow long posts or images or videos. But the blockchain would basically serve as one source of truth for what you post, with lots of different ways to interact with it.

CORY: I don't know what a blockchain gets you that a uniform resource locator doesn't, except that the URL doesn't burn an entire mountain of coal every time you try to figure out where the post lives.

SPENCER: Yeah, fair enough. Well, I guess, the main thing is that you don't have to trust the source that hosts it, right?

CORY: Yeah, I guess. But I don't think that's the problem. The problem isn't that people can't figure out how to put files on a server. The problem is that the system is controlled through choke points that get to decide whether or not the file sitting on a server can be accessed from one place or another. As an example, I run my own mail host, and I have done for 20 years now. I say I run it, but actually my systems administrator, Ken Snider, a brilliant network administrator who used to be the CTO of Wikimedia, runs it for me on a high-availability server at TorIX, the main internet exchange point in Toronto, Canada. So it's a giant, well-provisioned data center. The mail that comes from the server looks like good internet mail, and the server has one user: me. But I'm currently blocked by Comcast; I was just blocked by Gmail. I spent three years being blocked by AT&T because of their black hole lists. Now, putting that mail on the blockchain wouldn't help, right? The problem is that you have a couple of firms that act as choke points to digital infrastructure and have really reduced the web to, as Tom Eastman says, “five giant websites filled with screenshots of text from the other four.” And whatever backend infrastructure sits behind content that's invisible to those services is irrelevant to whether it ever becomes visible on them.

SPENCER: Well, I get your point about the energy wasting of blockchain, but doesn't it solve the trust issue, which is that if it's all in the blockchain, you don't actually have to trust anyone who is in control of the content, right? It's true that anyone can set up a server and put content on it, but then do you trust that person to not add restrictions or try to prevent you from using it the way you want? I mean, doesn't that actually avoid a major issue?

CORY: Well, I think you're confusing where the restrictions would be problematic. Presumably, if you store content on the blockchain, at least some of it will be sensitive and will be encrypted, and only some people will have keys to it. We want users to be able to restrict who can see their content. If you're setting up your own server, or instantiating it on someone's cloud infrastructure in, say, a container that can be easily moved from one cloud to another, and if there are lots of places you can move it from and to, if it's not all monopolized by four different cloud hosts or whatever, then that part's fine. I don't think the reason people can't access my mailing list is that they don't trust me. They've actually asked to subscribe to my mailing list; they do trust me. So the impediment to them receiving it is not that I'm hosting it on my own server and they don't trust me; the impediment is that all of their service comes from a handful of monopolists who are careless in the way they approach this stuff. One of the reasons it matters that I got blackholed by AT&T for three years is that AT&T bought almost all of the ISPs in America, which meant that anyone who had an ISP-supplied mailbox just couldn't get mail from me. And they didn't know it. AT&T is so big, and it's so confused because it merged so many different services, that it both doesn't care to and probably can't manage that blacklist well. And if the mail I sent to AT&T subscribers were hosted on the blockchain instead, it wouldn't improve the situation at all, at least as far as I can tell.

SPENCER: Yeah. I think there are different problems that can be solved. If you're talking about content that's public, like Twitter content, then if it sits on a blockchain, there'd be lots of different people who could consume it, and nobody could stop it from being consumed in any which way. So that is a solution. But I agree with you that for your use case, yeah...

CORY: Then you have to organize “international everybody-quits-Twitter-and-starts-putting-things-on-the-blockchain day,” which I think we should at least stipulate is extremely implausible. And I'm not sure why we couldn't just have federated protocols. If we think we can get everyone to do that, why don't we think we can just get everyone to join federated services instead?

SPENCER: That makes sense. So how would they work? Is the idea that there's essentially a uniform interface that they have to communicate through?

CORY: Yeah, they adhere to a standard. And we have those. In fact, we have a really good one [chuckles], which is the original Twitter API, because it was designed to be federated, and it's basically what Mastodon servers use. So we have Mastodon. The only thing we don't have is high-quality interoperability with Twitter, because Twitter has decided to block that. Now, they may voluntarily change their posture towards the fediverse. They have this Bluesky project whose goal is to create what they call an app store for content moderation algorithms, so you can choose different ways of viewing the content, and presumably you could federate it with other sources. Or we might get interoperability with Twitter through something like the ACCESS Act, legislation pending in Congress that requires firms of a certain size to expose APIs to interoperators, who are bound by certain privacy rules (they're not allowed to commercialize the data they receive over these APIs), and to allow them to interoperate. Each of those APIs would be designed by a joint committee of representatives from the dominant firm, some of its smaller competitors, representatives from NIST (the National Institute of Standards and Technology), and a couple of public interest reps or academics. So that's another way we might get it: a kind of “shotgun wedding.” Or we might get it by creating a defense for interoperators. We might say, as a matter of law, that if you do something that might be a copyright or patent or Terms of Service infringement, but you can show you were doing bona fide interoperability that doesn't violate any other law (privacy law, deceptive practices law, or what have you), then you have a defense. And then maybe we'd get funders who would come in and fund smaller firms to interconnect with Twitter without its permission, or Facebook, or whatever, or merge them both and have a single interface for Twitter, Facebook, and LinkedIn.
And all of those are plausible routes to it. You might also see something like the FTC settling its claims with Facebook and, as part of the settlement, appointing a special master who says to Facebook, “Alright, of course we want you to be able to defend your patents and your copyrights and not to have people hack your servers and so on. But the special master is your adult supervision. Anytime you're going to send a legal threat to someone, you have to clear it with the special master, who's going to make sure that you're not trying to shut down interoperability, that you're actually defending your users and not your shareholders.” Those are all plausible ways to do it.
Another interesting way might be to have procurement guidelines altered. One of the first ways we got widespread interoperability was the US government refusing to buy proprietary tools, specifically rifles in the Civil War. They told rifle manufacturers, “We understand that your shareholders are very important to you, but us having guns with interchangeable parts and ammunition is very important to us. And we will not buy guns from you unless all the parts are interoperable and the ammo is interoperable.” Today, the US government could do that at the stroke of a pen, and probably should. The fact that we have a bunch of unified school districts across the country that procured Google Classroom at enormous expense, without securing an assurance from Google that it wouldn't attack contractors they hired to modify Google Classroom to better suit their pedagogical needs, is really a matter of gross negligence. How could you claim to be spending taxpayer money wisely if that's how you spent it? And that's just a drop in the bucket compared to national spending. In aerospace, for example, the idea of having interoperable parts long ago went by the wayside, and there are a lot of aerospace components that have single-source suppliers.
And private equity funds figured out that this was an enormous opportunity. Most of those single-source suppliers have been rolled up by private equity funds that have, paradoxically, slashed the prices they charge primary military contractors like Boeing, so that Boeing pays almost nothing to get these parts, well below cost, which guarantees that everything Boeing builds for the US military has lots and lots of them. And then the replacement parts, for when they wear out, cost tens of thousands of percent more than they cost to manufacture. So basically, these are ticking time bombs. Really, the US government, at every level and in every department, should be procuring for interoperability and mandating interoperability in its procurement.
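The "adhere to a standard" point earlier in this answer can be made concrete with a toy federated post: two servers that agree on a message format can interoperate without either one's permission mattering. The JSON shape below is a simplified illustration loosely modeled on ActivityPub-style objects, not the real specification, and all names and URLs are hypothetical.

```python
# Toy illustration of federation: a post is just agreed-upon JSON whose id is
# a plain URL, so any conforming server or client can fetch and render it.

import json

def make_note(server: str, user: str, text: str) -> dict:
    """Build a minimal federated post; its identifier is just a URL."""
    return {
        "type": "Note",
        "id": f"https://{server}/users/{user}/notes/1",
        "attributedTo": f"https://{server}/users/{user}",
        "content": text,
    }

def render(note: dict) -> str:
    """Any client that knows the shared format can render any server's note."""
    return f"{note['attributedTo']}: {note['content']}"

note = make_note("mastodon.example", "alice", "Hello, fediverse")
wire = json.dumps(note)  # what would travel between servers over HTTP
assert render(json.loads(wire)) == "https://mastodon.example/users/alice: Hello, fediverse"
```

The design point is the one Doctorow makes about URLs versus blockchains: the hard part isn't naming or storing the post, it's whether the dominant platforms will let anyone else read and write it.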

SPENCER: That's really interesting. It seems like there's a duty to the American people not to have vendor lock-in when it's the US government buying something, right?

CORY: Yeah, and that's a principle that's come up a lot overseas. In Germany, there have been various efforts of varying degrees of success to mandate that city governments or state governments run on free and open source software, not merely for security or transparency, but also to have multi-vendor solutions.

SPENCER: So changing topics slightly, do you think that tech companies should be broken up?

CORY: Mmm, yeah...the answer is often, but it's hard, and the devil is in the details. Breakups are awesome, but they take a long time. And if I were going to go through a hierarchy of breakups, I would say, let's start with the firms that obtained mergers under false pretenses. Facebook, for example, assured regulators that it would never merge the backends of WhatsApp, Instagram, and Facebook. And it did. So I think that if you secure regulatory approval to effect a merger, and you do so by lying to the regulator, the automatic penalty should not be a fine; it should be unwinding the merger. And Facebook, I think, understands that this is a plausible outcome. So Nick Clegg (the former Deputy Prime Minister of the United Kingdom, who is now Facebook's head of international relations and is paid millions of pounds a year to go around the world and defend Facebook) said that it would be terrible to break up Facebook and WhatsApp and Instagram, even though they lied to get this merger, because it would be very expensive for Facebook. And I think that is a really good reason to break up Facebook, Instagram, and WhatsApp: so that they'll learn. If it's cheaper to break the rules than to follow them, you invite more rule-breaking. As Voltaire said, “Sometimes you execute an admiral to encourage the others.” So I would start with those. I think there are also a lot of anti-competitive mergers that should be revisited and unwound. And I think the test should really be whether a company that is behaving badly, that is harming us, is doing so because it is big, whether its bigness is what harms us. With Facebook, for example, you can trace, I think, all or at least the vast majority of the harms it visits upon us to its scale and its desire to scale.
When you look at the whistleblower testimony, you see that the desire to scale is why, whenever there's a juncture where Facebook's interests conflict with the public interest, it goes with its shareholders' interests, and it's always about scaling. So if the fact that Facebook has attained scale is what makes it harmful, then the remedy should be to reduce that scale. And with Facebook in particular, I think there's a great case to be made for it, because we've tried everything else. Facebook paid the largest fine in American history, and it didn't change their behavior. They just did it again. So if they have fines and consent decrees and all kinds of stuff that's supposed to moderate their conduct, and none of it does, then you've got to go thermonuclear. Now, the problem with breakups is that they take a long time, and they're not always successful. AT&T ultimately took something like 70 years to break up. IBM was in antitrust hell for 12 years, from 1969 to 1981. Every year for 12 years, it spent more on antitrust lawyers than the entire Department of Justice spent on antitrust lawyers for all of its cases. And after 12 years, it wriggled off the hook. But even though that case was ultimately unsuccessful, and it was a very long, hard slog, I think we can count it as a victory. The reason is that it moderated IBM's conduct, and it moderated the conduct of the entire sector. The entire sector looked at what was happening to IBM and said, “Whatever it was that caused IBM to land in this awful position, we're not going to do it.” And when IBM got off the hook, they were like, “We're not going to do that again, either, at least not for quite a long time.” So one of the first projects that IBM undertook after the antitrust suit was the IBM PC.
And it knew that the Department of Justice was very angry because IBM had been using hardware-software tying as a means of extracting very high rents, monopolizing markets, and creating vertical monopolies. So they said, “Whatever we do, we are not going to make an operating system for this PC.” And they found a couple of weird-looking kids named Bill Gates and Paul Allen, and they hired them and their new company, Micro-Soft, to make an operating system for them called DOS. And that's where we got Microsoft. And when Phoenix came along and cloned their ROMs, they were like, “We don't want to stop people from making plug-compatible PCs, because our war on plug-compatible mainframes was one of the reasons the DOJ went after us.” And so we got Gateway, Compaq, Dell, and all of those other PC companies because of the antitrust case. And then the same thing happened again. Along came Microsoft's antitrust scrutiny and seven years in antitrust hell. And again, the company wriggled off the hook. But when a couple of Stanford grad students named Larry and Sergey started a company called Google, Microsoft did not do to them what it did to Netscape. And according to all the people who were in the boardrooms when that was happening, the reason was that nobody wanted to get back into antitrust hell. In particular, Bill Gates (talk about a company that took its cues from its founder) had been personally humiliated during the antitrust suit against Microsoft. I would urge everyone listening to this to go to YouTube and watch Bill Gates's deposition. It is the most painful thing you've ever seen. It went viral on VHS; people used to swap cassettes of Bill Gates just losing his marbles during this deposition. And nobody wanted to see Bill Gates back on the stand again; it was so humiliating to him. And Bill Gates himself has tacitly admitted that this is what caused Microsoft to moderate its conduct.
Back in 2019, Kara Swisher asked him, “Why did Microsoft not buy Android when it was for sale?” And Gates said, “Oh, we were distracted by the antitrust case.” But the Android sale was seven years after the antitrust case ended. What he really meant was, ‘the antitrust suit took the wind out of our sails. It traumatized us so thoroughly that, even seven years later, nakedly anti-competitive vertical mergers were not something we were willing to do.' And so I think that breakups and attempted breakups are absolutely worth every penny we spend on them.

SPENCER: Cory, this is really interesting. And I could happily ask you questions for another hour, but I know you have to go. So thank you so much for coming on.

CORY: Oh, it was absolutely my pleasure.


JOSH: A listener asks: What are your non-work related hobbies?

SPENCER: So in terms of athletics, I enjoy bouldering, which I'm very amateur at, but I think it's fun. I also do mixed martial arts once a week, which I really enjoy. I do get injured, but I think it's a lot of fun. Beyond that, I enjoy running studies, which I know is a weird hobby, and it crosses over into my work, but I really enjoy creating and running studies in psychology. I also really enjoy writing. I try to write an essay every two to three weeks; I've written over 200 essays now. Again, that blends in with my work, but it's something I really enjoy and view as a hobby.



