CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 174: Systems of governance built on prediction markets (with Robin Hanson)

September 8, 2023

What is futarchy? Why does it seem to be easier to find social innovations rather than technical innovations? How does it differ from democracy? In what ways might a futarchy be gamed? What are some obstacles to implementing futarchy? Do we actually like for our politicians to be hypocritical to some degree? How mistaken are we about our own goals for social, political, and economic institutions? Do we enjoy fighting (politically) more than actually governing well and improving life for everyone? What makes something "sacred"? What is a tax career agent?

Robin Hanson is associate professor of economics at George Mason University and research associate at the Future of Humanity Institute of Oxford University. He has a doctorate in social science from California Institute of Technology, master's degrees in physics and philosophy from the University of Chicago, and nine years of experience as a research programmer at Lockheed and NASA. He has over ninety academic publications in major journals across a wide variety of fields and has written two books: The Age of Em: Work, Love and Life When Robots Rule the Earth (2016), and The Elephant in the Brain: Hidden Motives in Everyday Life (2018, co-authored with Kevin Simler). He has pioneered prediction markets, also known as information markets and idea futures, since 1988; and he suggests "futarchy" as a form of governance based on prediction markets. He also coined the phrase "The Great Filter" and has recently numerically estimated it via a model of "Grabby Aliens". Learn more about Robin at his GMU page or follow him on the-website-formerly-known-as-Twitter at @robinhanson.

SPENCER: Robin, welcome.

ROBIN: Hello, Spencer, long time no see.

SPENCER: I think it's not an overstatement to say that you're one of the most innovative thinkers; you keep coming up with new, interesting ideas. And yet, people don't always want to implement your ideas. [chuckles]

ROBIN: That's true. [chuckles]

SPENCER: Sometimes your proposals are maybe too innovative. Some would say maybe bordering on unreasonable. But I think they're very, very interesting. So why don't we start there? Can you give us a quick introduction to how you think about the world and we'll start maybe talking about futarchy, one of your interesting proposals.

ROBIN: I'm an economist, but I didn't start there. I started long ago in engineering, then I switched to physics, then I did computer science for nine years (AI research). And then I finally went back to school in order to pursue my interest in alternative institutions. And I might say that, earlier in life, I was thinking technical innovations would be the key. And I realized, at some point, often technical things are blocked by social things. And then I turned my attention more to how we can do social things. And then I seemed to see a lot of really big improvements possible there. It seemed much easier to find those than it was in computer science or physics. And that really excited me, so I switched into social science. I got a PhD in social science. I initially intended to do lab experiments to test institution ideas, but then was told, "You're not supposed to do experiments on things you don't have theories of." That was the rule there, and so I did more theory. But over the long run, I've pursued a lot of institution ideas, most of which don't really need that much theory, honestly. And I had disappointing success in getting people to engage with them further than nodding and agreeing that it sounds interesting. So that's my context. So maybe, after we discuss some ideas, we could come back to what's the problem.

SPENCER: Yeah, that's a great intro. So you mentioned that it's easier for you to find innovative ideas in the social realm than in the more technical realm where you started. Why do you think that is?

ROBIN: Well, unfortunately, I think what that was telling me even right then, is that social innovations are often not adopted, so they are repeatedly discovered. In a world where the first time somebody thinks of an idea, it gets implemented and adopted, then it gets pulled out of the pool of new ideas. But in a world where every time somebody thinks of a new idea, it doesn't get adopted, then lots of people can end up thinking of the same idea. And then there's less energy going in that direction, as well.

SPENCER: So let's jump into a concrete idea you've had: futarchy. Do you want to just introduce us to the topic and also tell us why you think it's a good idea?

ROBIN: So in my mind, this is my best idea, because it takes on the biggest question. Our world is full of people making decisions at all sorts of levels. And then we often make institutional decisions about how to structure institutions or even which institutions to adopt. And at all levels, including that most general level, fundamentally, what we need to do in a decision is to aggregate both information and values. We need to figure out what our best model of the truth is relevant to that situation, and figure out how our values weigh on that to decide what to do. And if you look at sort of our best institutions and ask what's wrong with them, I came to decide that the thing that most often goes wrong in our institutions, even at the highest level, is that we don't aggregate information well. That is, different people know different things. They disagree with each other. And it's just not clear whether they're doing their best job of getting everything together to make that decision. So that seemed to me the most important institution question to address. If you could make a better institution for making decisions in general, and attack the problem of information aggregation, then it could not only help you decide who to marry, or help your company decide which product to make, but we could decide big national and international issues: how to restructure crime, how to organize the military, how to insure against existential risk. All of those are decisions we make, and we arguably do them badly, because we don't aggregate information well.

SPENCER: Would you say that democracy, for example, is a kind of information processing system that is sort of a competitor with futarchy?

ROBIN: Well, democracy is a way that we make decisions at the highest level in government. And it is a combination of a value-aggregating system and an information-aggregating system. And I'd say it does a pretty bad job of information aggregation. And so if I want to reform it, what I want to change is the information aggregation part, but leave the value part alone. So when we get to that, my slogan will be: vote on values, but bet on beliefs. We're going to keep a system similar to the one we have now in terms of how we judge values, but we're going to do a new thing for estimating what's true, for aggregating information. But we don't want to start with that example, because we want to work up to it.

SPENCER: Great. Okay, so where's the best place to start?

ROBIN: So let me start with firing the CEO. So most for-profit companies have a CEO. And usually the Board of Directors is in charge of keeping the CEO or firing them and the Board is usually friends of the CEO, many of them were put there by the CEO, they don't want to rock the boat, they want to be on lots of boards. And so they arguably don't try very hard, or as hard as they should, to get rid of the CEO, when that looks like a good idea. So we want to make a better mechanism for getting rid of the CEO. That's our task here, okay? On board with me so far?

SPENCER: Yep, got it.

ROBIN: Okay. So what we're going to do is notice that we have an outcome that we are willing to use as a proxy for a good decision here, which is the stock price. And we can identify after the fact which decision was made. So we might say: at the end of this quarter, will the CEO still be the CEO or not? That would be a way to see which decision we made. So the general structure is going to be: anytime we have some discrete options, and we have a measure of what we wanted after the fact (here, the stock price), then we can create what I'm going to call a decision market, where a market advises that decision. And in the case of firing the CEO, the options are: keep the CEO or fire them. And the outcome is the stock price. So now all I have to do is show you how to set up a decision market, i.e., some new markets that will tell you which decision to make in the situation. So let's review an ordinary stock market: there's stock for sale, and you can buy or sell at the current price. And when you're asking yourself, "Should I buy or sell stock in the market?" what you're supposed to do is think about all the situations that company could be in and ask yourself, in each situation, "How much is this company worth in terms of revenue minus costs?" Then weigh those up across all those situations, come up with a number, and compare that to the market price. If your number is higher, you should buy. If your number is lower, you should sell. That's how a stock market works. So now, we're going to make two new stock markets, very much like this old stock market, except each is different in the following way: its trades are called off if a condition isn't met. This is like called-off betting. So in one market, we're going to call off the trades if the CEO stays in power, i.e., is still CEO at the end of the quarter. So these trades will only happen if the CEO leaves. Now, when you're thinking about trading in this market, it's gonna have a different price.
And you're going to ask yourself how that price compares to the weighted average you come up with over the different scenarios. But now you're only going to look at the scenarios consistent with this condition that the CEO leaves. And you're going to say, "Across all those scenarios, how much is the company worth?" And you're going to come up with a number and you're going to buy or sell depending on whether you think that number is higher or lower than the current price. One market is estimating what happens if the CEO stays. So those trades are called off if the CEO leaves. The second market is estimating what happens if the CEO leaves. So those trades are called off if the CEO stays. So now we've got two prices: how much the company is worth if the CEO stays, and how much the company is worth if the CEO leaves. And now we can look at the difference of those two prices and see which one's higher. And that's a clear market signal about whether the CEO should stay or leave. In one case, the market could be saying, "Look, this company's worth more without the CEO." And that's a decision market. So that's a way to get market speculators who are just trying to profit from these trades — they don't have any other larger social interest in mind, necessarily — to tell us which decision to make here: do we keep the CEO or do we get rid of them?
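
The two-conditional-markets mechanism just described can be sketched in a few lines of Python. Everything here (prices, function names, the dollar figures) is illustrative, not from the episode:

```python
# Toy sketch of a two-market decision market for "keep vs. fire the CEO".
# Each conditional market trades shares that pay out the final stock price,
# but a trade is called off (money refunded) if its condition fails to hold.

def settle_conditional_trade(buy_price, final_stock_price, condition_held):
    """Trader's profit per share on a conditional buy with called-off trades."""
    if not condition_held:
        return 0.0  # trade called off: refunded, no profit or loss
    return final_stock_price - buy_price

def decision_from_prices(price_if_stays, price_if_leaves):
    """Compare the two conditional prices and advise the board."""
    return "fire" if price_if_leaves > price_if_stays else "keep"

# Illustration: the markets value the company at $95/share if the CEO stays
# and $110/share if the CEO leaves, so the market signal is to fire.
print(decision_from_prices(95.0, 110.0))                             # -> fire
print(settle_conditional_trade(100.0, 110.0, condition_held=True))   # -> 10.0
print(settle_conditional_trade(100.0, 110.0, condition_held=False))  # -> 0.0
```

The point of the called-off settlement rule is that a trader in the "if the CEO leaves" market only cares about scenarios where the CEO actually leaves, which is exactly what makes its price a conditional estimate.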

SPENCER: Right. And presumably, if some person who's interested in making a profit comes along, and they think one of those markets is mispriced, then they would bid the price toward what they thought it should be, because they could make a profit that way, which would make the market more accurate. So essentially, you're leveraging people's profit-seeking interest in order to get information about whether the company is more profitable if the CEO leaves or stays. You're solving a social purpose, understanding how good the CEO is, by using profit-seeking motives.

ROBIN: Right. So it's important to see that the people who trade in this market may be employees at the company or friends of the CEO, and they may have many personal interests involved here. But for the purpose of these trades, they want to buy if the price is too low and sell if the price is too high, so they just want to tell us what they think the most accurate price is, even if they have other interests. And this is a way of harnessing greed — the energy that people are willing to put into speculating on assets — in order to tell us things that we want to know. That's the key idea of prediction markets: we harness people's greed to profit from trades for the purpose of telling us things we want to know.

SPENCER: Could there be strange confounding effects? Like, let's suppose people thought the only way the CEO was gonna get fired is if some unpredictable thing happens and it makes the company worth a lot less. Could that influence the markets in a strange way?

ROBIN: Yes, it could. So if we're not at the moment of decision about whether the CEO actually leaves or not (if we're, say, a month before that moment of decision), we're trying to guess what things we could all learn between now and then that might influence both these prices and whether the CEO stays or leaves. And in that case, we can get what I call decision selection bias, where you might see the price conditional on staying higher than the price conditional on leaving, even if the company would be better off if the CEO left. But when we get to the moment of the decision, at the moment when, say, the board is looking at the prices asking should we keep the CEO or not, then as long as whatever the board knows is represented in the market prices, we don't have this problem of decision selection bias. Right then, we'll just have the conditional expected values.

SPENCER: Right. I guess if the board had secret information, you could still have a problem, because they might know secret information that would be correlated to whether the CEO stays or goes. But if they were just acting on information that the market knew then you're good; the market is a true estimate of how good the CEO is. And if the market is wrong, someone could profit off of that.

ROBIN: Right. So it is important that we make sure that, whatever the board knows, either they can trade on it or somebody they tell can trade on it. And in order to avoid this problem, one condition is that everybody needs to know when the decision is happening.

SPENCER: You mentioned the sort of employees trading and the Board trading. And this might surprise people immediately, because we all know about insider trading. And generally speaking, it's illegal for people with insider knowledge to trade. But you're talking about this as though it's good for insiders to trade. So can you elaborate on this?

ROBIN: Sure. So the major reason we're worried about insider trading is because if you, as an ordinary stock buyer, are expecting insider trading, that's going to make you shy about investing in that stock. And the company might rather people were more eager to invest in the stock and have the price be a little less accurate. So that's the key trade off. Insiders who know things when they trade, they do make the price more accurate. But then other people are afraid of being on the opposite side of that trade, and losing when the other trader wins. And that makes them shy about trading that stock. I think this is a reason to just let the people who founded the company decide whether to allow insider trading in that company. I don't think we need a more general national policy on the question. The founders of the company, I think, internalize the trade off. On the one hand, you could get more trading, on the other hand, you can get a more accurate price. Which one do you want? But what we're talking about here isn't going to make anybody reluctant to buy these stocks. We're talking about the sort of overall value of the company. The problem with insider trading is when insiders are revealing something about the overall value of the company that you don't know, and you're reluctant to trade against them. Here, we're talking about trading on this particular decision, that is you can buy up on one side and down on the other, and not really say anything about the overall value of the company, instead be saying which decision we should make.

SPENCER: You mentioned investors being shy about getting involved when there's insider trading. I think another way to put that is that if you're trading on a stock that there's insider trading in, you're sort of at a competitive disadvantage, in a sense, because you might be looking at the stock and saying, "Well, it looks like it's undervalued according to all the public information." And you might be right. But then insiders might have secret information. And so you think you're getting a good deal. In fact, the insiders are making money. So it sounds like you would actually support a policy where companies get to choose whether insider trading is allowed, but then I guess that would be broadcast. So like, all the investors would know this company allows insider trading, so be careful.

ROBIN: Absolutely. It's also true that when there's value in a price, you can subsidize trading in that market. So we can make automated market makers who lose on average to people with information, and that makes it easier for uninformed people to trade on the stock; they don't suffer as much adverse selection, as we call it. And that's something you could do in these decision markets. That is, you could say, "Well, this is an important decision to make; it's worth a lot to the company to make this decision well, so we are willing to pay extra to get more information on this topic. And the way we pay extra is we subsidize a market maker on those trades." And that means other people, when they come and see more profit to be made, will trade more.

SPENCER: Another challenge that sometimes is raised for conditional markets is that they tie up capital, so it can be not that profitable to trade in them. Let's say you're trading in a conditional market where it's maybe three years until it's resolved, and then half of the market is not gonna get resolved at all, because it's gonna get canceled if that condition doesn't come true. So is there a way around that? Or do you see that as a significant problem or not really?

ROBIN: In general, the cost of an informed trade is basically how much you have to put in the market in order to move the price, times how long you have to wait until other people see the light and you can undo your trade for your profit. So it's more expensive to trade on information that will take longer to be revealed. And of course, it's more expensive, but also more profitable, to make a trade in a market that's thicker: more people trading, harder to move the price. If you're already betting on the company, like on its stock, it actually doesn't cost you any extra to also make these bets on whether the CEO should leave or not. That's a feature of what I call combinatorial trading. If you have a set of topics, it'll cost you something to bet on each one of them. But then it doesn't cost you extra to bet on combinations of them. If you want to bet on A-and-not-B, that doesn't really cost you extra relative to just betting on A or B separately.
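
The combinatorial-trading point can be illustrated with a toy Arrow-Debreu setup (all prices here are invented for illustration): when each mutually exclusive final state has its own security paying $1 if that state occurs, an event costs the sum of its states' prices, so a conjunction never ties up more capital than either event alone:

```python
# Toy combinatorial market: four exhaustive, mutually exclusive final states,
# each with a $1-if-it-happens security. An event's price (and the capital a
# buyer puts at risk) is the sum of the prices of the states it contains.
# State = (does the CEO stay or leave?, does the stock end high or low?)

state_price = {
    ("stays", "high"):  0.35,
    ("stays", "low"):   0.30,
    ("leaves", "high"): 0.25,
    ("leaves", "low"):  0.10,
}  # prices sum to 1.00 across all states

def event_price(event):
    """Cost of one $1-payout share of an event = sum over its states."""
    return sum(p for s, p in state_price.items() if event(s))

p_leaves = event_price(lambda s: s[0] == "leaves")       # 0.35
p_high = event_price(lambda s: s[1] == "high")           # 0.60
p_both = event_price(lambda s: s == ("leaves", "high"))  # 0.25

# The conjunction "leaves AND high" is a subset of either event, so it
# never costs more than betting on "leaves" or on "high" alone.
assert p_both <= min(p_leaves, p_high)
```

Only one state finally resolves, which is why a portfolio of bets on overlapping combinations doesn't require separately posting capital for each one.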

SPENCER: Is it because only one will resolve so you end up having to invest capital...

ROBIN: Yeah, so one state will resolve. And then that will decide all your bets and which ones count.

SPENCER: Okay, so what's the next step to get from this idea of a conditional market to futarchy, the bigger idea?

ROBIN: Let's pause for a moment and say: this mechanism can be applied to lots of things, not just governments. So this is a nice thing that you can try on small scales. For example, you can have a project with a deadline. And you can have a betting market on the chance we'll make the deadline. And then you could ask, "Conditional on making some changes to the project, what's the chance of making the deadline?" So you could change requirements, change personnel, change resources, and you could ask the market, "If we made these changes, what's the new chance of making the deadline?" So you can do that on a very small scale. Most organizations hire, say, one or two people every year in a division of 10 or 20. And we could ask for each new hire, "If we hired this person, what will their employee evaluation be in a year or two?" And then we could look to hire the people for whom the market says, "This person will look good in a year or two, if you hire them." The point I'm trying to make is, these are high-value decisions. This would be adding a lot of value to those decisions. This is the way in which this mechanism could be providing value to people all the time now, and it allows small-scale tests of the mechanism. So I would want to do these small-scale versions and get some experience with that first, before we actually tried to adopt it on a larger scale. But in order to motivate people to be excited by this whole thing, I do want to describe where we could go with it. What happens if we take this all the way to the biggest levels?

SPENCER: That makes sense. It is fascinating how costly it is to hire the wrong person. You'd think that companies would be extremely motivated to find better mechanisms for figuring out who would be good. But of course, you can also imagine there might be a lot of sensitive issues around predicting a new hire's performance. Like, it could just be incredibly awkward if you predict someone's going to fail, or if you're effectively rooting for someone to do badly, that kind of thing.

ROBIN: Right. And so, a general point I would want to make is just: I can use very general economics and basic reasoning to give you a simple structure that plausibly would produce a lot of value. But in general, innovation is usually the combination of some simple, elegant idea, plus a lot of messy details that you have to get right in order to make the simple idea work. With innovation in general, at first somebody has maybe a simple, elegant idea, but then you have to implement it in some context, see what happens and what goes wrong, and try variations until you find the combination of those details that actually seems sustainable and useful. And that needs to happen with this kind of innovation, too. I can't tell you exactly what the right answer is before a lot of trials are done. I can just tell you, this looks promising enough that you'll probably find some variation that works if you search among a dozen or two combinations.

SPENCER: Okay, so let's now talk about applying this at the biggest level. What does this look like in government?

ROBIN: Okay, so what we needed, in general, was an outcome measure and some discrete decisions. So in government, the discrete decisions are usually in the form of bills offered to be passed. Congress, for example, repeatedly has bills that come up, and they either vote for them or against them. So now we need an outcome measure, which is: what are we measuring these bills against, what do we want to achieve? For the company, the outcome was the stock price. For, say, a city or nation, we don't have a simple price like that. Although, say, if people could buy and sell citizenship, then the current price of citizenship might actually be a good proxy for those decisions. That would be a measure of how eager people were to come. But in the absence of a good price of citizenship, what you'd have to do is construct some measure like GDP. GDP is a measure standardly produced by the Bureau of Economic Analysis. They measure lots of prices, they run lots of surveys, and they try to see the state of the economy and add all the pieces up into some overall measure of economic activity. And scholars often use that as a proxy for the wealth and prosperity of nations. And you might, as a first cut, want to adopt policies that will give your nation a higher GDP. But probably people want to add some other considerations. They want to measure leisure, they want to measure nature, perhaps international respect, in order to produce some more complicated measure of national welfare. And this would be a measure where every year, or even every month, say, we'd have some number: what was national welfare this month. And then total national welfare would be some weighted average of national welfare each month, across all the months in the future. And what we'd want to do is adopt policies likely to increase national welfare compared to not adopting them.
So once we've got some government agency publishing these monthly numbers for national welfare, then we could create an asset that pays out in proportion to national welfare. It's a piece of paper, and if you turn it in, you get so much money, depending on what national welfare was in a given month, say. And now we can use this whole decision-market mechanism I described before for this process. Every time there's a bill before the government to consider, we ask the markets: what is national welfare conditional on adopting this bill, and conditional on rejecting this bill? And we adopt it if the first number is higher than the second.
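
As a sketch, the futarchy rule just described reduces to two small pieces: a weighted sum of monthly welfare numbers (the discount scheme and all numbers below are invented, not part of the proposal's specifics), and a comparison of the two conditional-market estimates of that sum:

```python
# Sketch of the futarchy decision rule: total national welfare as a weighted
# (here, geometrically discounted) sum of monthly welfare numbers, and a bill
# passes iff the market estimate conditional on passing beats the estimate
# conditional on rejecting.

def total_welfare(monthly_welfare, discount=0.99):
    """Weighted sum of a stream of monthly national-welfare numbers."""
    return sum(w * discount**t for t, w in enumerate(monthly_welfare))

def pass_bill(welfare_if_passed, welfare_if_rejected):
    """Adopt the bill iff the conditional-market estimate is higher."""
    return welfare_if_passed > welfare_if_rejected

# A welfare asset would pay in proportion to the monthly numbers, so its two
# conditional prices serve as the two arguments of pass_bill.
print(total_welfare([1.0, 1.0], discount=0.5))  # -> 1.5
print(pass_bill(105.0, 100.0))                  # -> True
```

In the full proposal the two arguments are market prices, not computed sums; the markets are doing the estimating, and this rule only compares their outputs.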

SPENCER: Right. So it still would be in the realm of the legislature to produce the bills, but they would no longer be choosing what gets passed. Is that right?

ROBIN: Well, they would be choosing the national welfare measure, most importantly. That's the thing we would still need that legislature for. They would vote on bills, say, that decided how much trees counted for, how much population counted for, and other such components of the overall measure of national welfare. There may be other ways to do it. But my simple proposal in general is to change one thing and hold other things constant, so that you can get some sense of what this one change might entail. So I'm going to suggest leaving how we decide national welfare to the same mechanism we're using today to decide such things, while moving the passing of bills to a new mechanism. So first of all, there's proposing bills, and then there's passing them. I would actually have proposals go to an auction, where, say, every day there are two slots for passing a bill. We have an auction for each slot, and whoever pays the most for that slot gets their bill considered in that slot. Then these markets are set up about that bill, and we decide whether we pass it or not. If we pass a bill, that will be because there's this difference between national welfare if we pass the bill compared to national welfare if we don't pass the bill. And that difference can be a rough estimate of the economic value to us: how many billions of dollars are we gaining by passing this bill? And then I think we could give some fraction of that, maybe 5%, to whoever proposed the bill, as their fee for proposing a bill we decided to pass. And then people in the auction would be bidding according to their estimate of the chance that their bill would be passed and how valuable it would be.
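
The proposer's incentive in that auction comes down to simple arithmetic: the gap between the two conditional welfare prices proxies the bill's value, and the proposer gets some fraction of it. A back-of-the-envelope sketch (all figures invented; Robin only floats "maybe 5%"):

```python
# Back-of-the-envelope for the bill proposer's fee in a futarchy auction.

price_if_passed = 412.0    # welfare-asset price conditional on passing the bill
price_if_rejected = 400.0  # welfare-asset price conditional on rejecting it
fee_fraction = 0.05        # proposer's share of the estimated gain

estimated_gain = price_if_passed - price_if_rejected  # 12.0 welfare units
proposer_fee = fee_fraction * estimated_gain          # about 0.6 units

# A rational bidder would pay up to (chance of passing) * proposer_fee
# for an auction slot, which is what disciplines what gets proposed.
```
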

SPENCER: So why use an auction to decide which bills are put up to the prediction markets? I think a lot of people would have an immediate reaction, saying, "Well, doesn't that just mean that rich people are gonna propose bills?"

ROBIN: Well, rich people don't like to just throw their money away. If you pay a lot of money to have your bill go before this process, but then it isn't passed, you've just lost all your money, you didn't gain anything. So this process will only pass the bills that actually are estimated to improve national welfare. And you can make money if you recommend good bills.

SPENCER: So is your idea that it kind of doesn't matter in a sense where the bills come from, as long as the prediction mechanism is good enough, because whoever is putting them up, as long as they're improving that welfare, they get passed, and therefore it's fine?

ROBIN: Yeah. This is a profit-making enterprise. If you're clever enough to find a good bill, you get rewarded for that here. And that's what we want; we want people looking for good bills to propose, bills that if passed will produce big value. And we're trying to create an incentive here to do that. Obviously, we could also do this with some other process for passing bills. I have less confidence in that. But it would still be an improvement on the status quo, I'd guess.

SPENCER: But why not just have politicians, as they exist today, just propose a bill? What's wrong with that? Or do you just think that the auction mechanism would produce better bills?

ROBIN: Yeah, I think it would produce better bills. Still, under my proposal you'd have the assurance that others could propose what the politicians wouldn't. People might know of a bill that would improve national welfare, and then the politicians conspire not to let it be considered, because lobbyists say, "No, we don't want that, so don't put that up."

SPENCER: Would it be a head-to-head thing where all the bills up at that moment get pitted against each other and whichever has the highest welfare gets passed? Or does any bill that has a positive number get passed? How would that work?

ROBIN: I'm trying to keep this very simple. So the simplest thing is just one at a time. That's really what we do today with bills: we just vote on them one at a time. And that works fine here. So let's just do that.

SPENCER: So wouldn't that create a kind of weird incentive, where whoever's paying for the bill could stuff it with a bunch of stuff that just benefits themselves personally, as long as it doesn't destroy enough value to make the bill not be net positive?

ROBIN: Okay, so imagine you have a bill that's a reform of the parks plus my personal tax. And it turns out that the reform of the parks is valuable enough that, even with the personal tax added on, this bill passes. So what happens next is, if somebody else notices that, they say, "Well, I propose that we throw away the tax and keep the first part of the bill."

SPENCER: And that's a new bill. Oh, so you could override previous bills, I see.

ROBIN: Right, exactly. So anytime somebody could identify a part of a bill that was a tax, they could just propose a new version of the bill that dropped the tax.

SPENCER: That's an interesting mechanism for sort of enforcing less self-serving bills.

ROBIN: Right. So remember, the whole idea here would be that public opinion would still matter a lot for what values we have: how much we care about trees, how much we care about foreign opinions, how much we care about the future. Those would all be parameters in the national welfare function. And so politicians would make speeches about those and be elected on the basis of those, because that's where they'd be making their decisions. But figuring out what works, that would be left to the speculators; they would be in charge of deciding how to do things. And notice that even today, we have a similar split between politicians, who sort of represent public opinion about general goals, and agency employees, often expert specialists, who are the ones deciding how to implement agency policy. So we do not let most ordinary people speak to those expert topics, even today. Instead, we hire particular experts to make expert judgments about implementation. So now, we're going to be picking a different set of experts through a better mechanism. But we were already not very democratic about these expert-judgment tasks.

SPENCER: I wonder, in your model, why you even need politicians, though. Can you just have people directly say what they care about, like what they value, and then do the math, just have people fill out surveys, and then that goes right into the welfare function?

ROBIN: I think you might. So again, my overall philosophy is, when proposing changes, try to make them modular, so people can consider them one at a time, because it really gets hard to consider five different big changes all together at once. So I'm not opposed to more innovation in how we pick this national welfare function. I have some ideas myself we could go into. But the other part of futarchy I'm proposing here is already a really big change. And I think that's a big win just by itself, even without making any other changes.

SPENCER: I suspect one reason people might be very nervous about proposals like this is gameability. So are there ways that the system can be gamed? We see scandals in finance where it seems like people are gaming the system in different ways. In this case, we're talking about society being gamed. It seems like the stakes are even larger in a way.

ROBIN: So whatever system you have now is also potentially gameable, and is in fact being gamed. So it's not a criticism of a proposal that it is possible that it would be gamed. The question is just: is it going to be more gameable than the other thing you might do instead, or than what you have now? So I've gone into great detail about this mechanism, considering a great many possible concerns people have about things that could go wrong, and trying to address them in great detail, and I'm happy to spend some time examining that with you. But I will observe, however, that going into great detail on all those considerations doesn't usually move people very much in terms of whether they support such a thing anyway. So remember, the idea is to do small scale trials, see what goes wrong, and then fix it. So it's not necessary to figure out ahead of time everything that could go wrong. You just have to have a guess that, probably, you can fix it. But anyway, I'm happy to go through a bunch of particular details, as long as we understand that it probably won't change your final opinion. But let's just have some examples.

SPENCER: Let's start with just what is the biggest gameability concern that you've heard that you think is the most legitimate concern? And then how would you at least patch that?

ROBIN: Obviously, people could be corrupt in choosing the national welfare function. You could say you wanted a particular road to be built, and you could just add a term to the national welfare function that makes it higher if that road is built. And now the speculators will decide: yeah, building that road will raise national welfare. But the problem there is that you were allowing very specific things in the welfare function.

SPENCER: It's like the pork barrel kind of welfare function, right? The way people kind of stick some ridiculous clause into a bill that's like, "And we build 14 bridges," or whatever. It's like the welfare function is cobbled together out of massive special interests.

ROBIN: Right. So you'd want some sort of norm against overly specific national welfare functions. They should be more about general measures of good things rather than very specific choices.

SPENCER: Do you have a preferred way to try to enforce that? Or do you think it would just be up to experimentation?

ROBIN: That seems more like a case where somebody publicizes it, and other people are aghast, and they tsk-tsk about it, and those people don't get reelected. But even in the worst case, we're not going to do worse than the status quo today, because this is what's already happening today.

SPENCER: Well, there certainly is a lot of gaming.

ROBIN: Right. Now, I'll tell you what I think is actually the biggest weakness of this proposal: it makes hypocrisy harder. And I mean that very seriously. So today, people like to, say, elect politicians who say they care a lot about nature, and who will want to make sure we save nature. They will say that, and we will elect them on that basis. And then they will do some things that on the surface seem like they're moving in that direction, but not really do that much. And we could all be fine with that, if we didn't really want that much to be done; we just wanted to seem to be the sort of people who were in favor of that. With something like futarchy, you'll have to be more explicit about what you put into the national welfare function. If you put a high enough weight on trees, then you will, in fact, get more trees. But that could come at substantial cost. You could put in the welfare function, say, how much prostitution you want to have happen. If you really want to make sure very little happens, well, if you put that in, then we will introduce relatively draconian legal processes that will make it not happen.

SPENCER: Well, you could always put tree signaling into the welfare function, instead of actually making trees.

ROBIN: Right. But how is the Bureau of Labor Statistics supposed to measure that? So it's related, say, to bounty hunters. If we used bounty hunters as a way to enforce criminal law, then we'd have to be explicit about which neighborhoods count more for bounties. Whereas today, we can pretend all the neighborhoods count the same, but really privately instruct the police force to pay more attention to some neighborhoods than others. We allow hypocrisy that way.

SPENCER: So I feel like this is one of the most Robin Hansonian views: that hypocrisy is valuable. So can you unpack this a little bit? Because people probably are going to react and say, well, but why would that be a good thing?

ROBIN: Well, it's an obstacle in a sense that people will not let go of their hypocrisy. So if I want this to actually be adopted, and people perceive that this will get in the way of something they want, then they won't adopt this. So I have to, in some sense, negotiate with a community. If I'm going to try to get them to adopt something for their benefit, then they need to see this as for their benefit.

SPENCER: But do you actually think people believe that they're hypocrites and sort of would actually reject this proposal because this won't let them be a hypocrite?

ROBIN: Well, they would just see something they didn't like and say, "Hey, you guys need to stop this and change it." And they don't necessarily need to connect the dots. Politicians are often trying to figure out what the public really wants and giving it to them without necessarily explaining clearly why the public wants something.

SPENCER: Maybe you can give an example where you feel like this hypocrisy would be a barrier to futarchy being adopted.

ROBIN: Well, for example, as I just said, with something like prostitution or drugs or gambling. Those are things that we have laws against that we don't enforce very strongly. We pretend to discourage them, but not really. And if you put their frequency in the welfare function, then you will find that you are turning a knob that actually affects how much you do enforce them. Now maybe we could make it obscure and hard enough for ordinary people to understand that it might be a hidden knob to them. But there's a risk that the more explicit it is, the more they will object.

SPENCER: Okay, but let's walk through this in more detail. Imagine today there's a politician who's railing against how bad prostitution is. And people (I guess you would say) would vote for that person, even if they don't actually want prostitution to be reduced, if they just want to signal that they want prostitution to be reduced. So then what would happen in a futarchy scenario? Are you saying that voters hearing a politician rail against prostitution would then sort of, on some level, realize, "Oh, wait, if I actually vote for this person, that will actually reduce prostitution, which is not what I want," and so they won't vote for that person?

ROBIN: Right. So think of a mayor who runs on a platform, "I'm going to clean up the city." He says, "I'm going to reduce prostitution." And then we have prostitution rates before and after he's elected, and they don't change. That's an opening for another candidate later on to say, "He told you he was going to reduce it, but he didn't. I'm really going to do it." And so that's a case where, since you have those statistics, maybe they're being held more to an actual accounting of what they really did. But often we don't have such clear statistics. And so people are able to take a position favoring or opposing something, and it's really hard for opponents to point out, "They didn't really do that much." With this new system, they would be able to propose bills that change the welfare measure to penalize prostitution more. And then if they're elected, and we say, "Hey, how come you didn't do this thing you said?" then we're making it a little more obvious that they didn't do it.

SPENCER: But do you think that would cause people to be against the system as a whole? That's the kind of piece I'm missing here.

ROBIN: Right. Because I think, if somebody is shamed into actually reducing prostitution, and then it is actually reduced, and a lot of people get hurt and get arrested, and things change as a result of that, then they may dislike the powers that be and kick them out of office. They don't have to explain why.

SPENCER: I see. It's more like they don't like the consequences of changing the function. So then they're like, "I don't like the system."

ROBIN: Right.

SPENCER: I see. It's interesting, because it seems to me it is so unlikely this is actually why people would oppose it in advance. Or maybe you're not saying they'd oppose it in advance; you're saying that they would oppose it once it was actually implemented.

ROBIN: Right, or trials might be seen as having gone badly.

SPENCER: I see. Okay, got it, got it.

ROBIN: And then it never got scaled up to larger versions. So I think this is true with, say, new hires. In academic departments, we're supposed to be hiring people who would just do generally well as academics, publish a lot, and bring fame to the department. But typically, whoever's on the hiring committee is actually looking for more personal benefits. They're trying to hire people like themselves, who will work with them, who get along with them. And so there's actually a conflict between the supposed goal of hiring somebody generally good for the department versus hiring someone good for the people on the hiring committee. So if we had this general measure, a market forecast of performance a few years out, and used it to hire, we'd have a conflict. Because that measure (the markets) will say George is best by the general department measure, but the people on the hiring committee may know that they like Sally better for their personal benefit. And now there'll be a conflict: do they hire Sally or George?

SPENCER: This conversation reminds me a lot of your book Elephant in the Brain. How would you connect it to that? Or would you say it's related?

ROBIN: Yes, I'd say it's really related. I would, in fact, say this experience that I had of trying to design social institutions, to improve our social institutions, and then finding lack of interest in them, part of the explanation for that is that we're often wrong about the goals that we have for these institutions. And that's one of the obstacles to actually reforming institutions. And so for any area that you're trying to reform, it's important to ask: what do people actually want? And how could I design something to give it to them?

SPENCER: Right, because you might say, "Well, this institution exists to educate the public on XYZ, and therefore, if I have a proposal to do that better, people should be in favor of it." And then you propose it, and they're not in favor of it. And you're like, "Well, maybe this institution doesn't exist for that reason. Maybe that's just a false myth propagated about why it exists."

ROBIN: Right. The traditional economic institution design question is: figure out what people want and find a new institution that gets more of that. So if medicine is about health, and you say, "Well, how can we get people more health?" But if what's really going on is that people are pretending to want X while actually wanting Y, then you have to offer them a new institution that lets them continue to pretend they want X but actually give them more of Y. If you just give them more of X without giving them more of Y, then they won't actually be very interested.

SPENCER: So what's an example where you think the general narrative of what people want is really wrong, and you think there's a more accurate narrative?

ROBIN: In Elephant in the Brain, we go through 10 areas of life where we show substantial differences in those narratives. Take medicine: we talk about health, where it's really more to show that you care. Education: we talk about learning the material, where it's really more about showing off that you're smart and conscientious and conformist. Politics: we talk about improving the nation or the city, etc., where it's really more about showing loyalty to your tribe. These are some of the major areas in life where we are not that honest about what we're trying to do. And that's an obstacle to reform in many of those areas, including politics, which is what we're talking about now.

SPENCER: Because we're on politics, why don't we unpack that one a little bit. So you're saying that basically, people talk about politics as though it's us trying to improve society and choose the best options to get the best outcomes. But really, it's about signaling loyalty to your tribe. What's the case there that that's actually what's going on?

ROBIN: In terms of the evidence, we have a lot of evidence about people's political behavior. I got my PhD in formal political theory, and in the process learned a lot about our data on politics. People, say, are interested in their politicians' positions, even if those positions don't matter for policy. People care about who they're associated with, and whether they agree with their politics. People tend to line up on a one-dimensional spectrum of political positions, even though there's a vast high-dimensional space. People are very emotional and not very analytic about politics. There's just a whole bunch of evidence that people aren't so attentive to the thing they claim to pay attention to, and that in this context, often, it's a matter of us versus them. So people, as you may notice, say, on Twitter or elsewhere, are very energized by being in favor of their side of the political spectrum and dumping on the other side. If they can find a political position that really emphasizes that difference, then they are very much energized by it; they love it. But then the other side also loves it and fights just as hard the other way. If you're talking about ways that everybody could be better off, without hurting anyone, people get bored with that. That doesn't let them fight against the other side. They want to fight more than they want to improve society. And so that often means that things that would just be generally better are so boring that hardly anybody can be bothered to put any energy toward them. And unfortunately, that's one of the obstacles futarchy faces. As you can see, it's designed not to favor the left or the right particularly, even though it's made out of markets. It's happy to regulate markets and limit them, if that's what the speculators think would actually raise the national welfare function.

SPENCER: There's no doubt that people spend tons of time fighting the other side, and that there's a lot of loyalty to their own side. But I think one thing that makes it more complex is, I think, a lot of people genuinely see their side as the good side and the other side as the bad side. So it kind of mixes loyalty and outcomes. So if you actually got them to predict which way society would be better, I think they mostly would predict, "Oh, if my side was in charge, it would be better. If the other side was in charge, it would be worse." Or do you disagree with that?

ROBIN: Oh, I think that's people's first reaction. And then after a while, I think they pause and reconsider. So for example, I am in a world where there's a lot of libertarians. And of course, they tend to think that outcomes would be better with libertarian policy, and that markets look kind of libertarian. So they thought that decision markets sounded pretty exciting. But as they approach a very particular question about, say, global warming, taxes, or other sorts of things, they tend to get a little more shy about whether, in fact, speculators will agree with them that their policy will do better according to the simple measures you come up with. This is just a general phenomenon of betting. People are often willing to make grand, strong claims when they're just talking and pontificating. And then if you ask them to make a bet, all of a sudden, they start clarifying. What exactly the words mean becomes a lot more important. And all the other situations and considerations that might complicate it become much more salient. And of course, that's a benefit of having these speculative markets decide things, but it's also a reason why people are not so eager to participate.

SPENCER: So somebody might say, "I'm 99% sure that if we implement this, society will be better." And then you're like, "Okay, why don't we bet? I'll give you 50 to one odds." And they're like, "Ah, no way I'm not going to do 50-1 odds."

ROBIN: Exactly. So one of the obstacles to futarchy here is this hypocrisy, but it's not the only one, I'm afraid. And I'm actually pretty interested in trying to understand what these various obstacles are. This seems to me a central social science question for us to understand. We can much more easily design these improvements than we can get people interested in testing them or adopting them.

SPENCER: So besides this kind of hypocrisy idea, do you have any other overarching theories about why people aren't interested in putting this into practice?

ROBIN: Yes. So one is that people are shy about using money in sacred contexts. Government is often seen as a relatively sacred area as well. And many people don't like the idea of money becoming more central in those contexts. Betting markets are usually in terms of money, and that just makes it all seem tawdry and profane.

SPENCER: Right. Sort of like going to Grandma's house, having her cook a lovely dinner for Christmas, and then you give her money at the end, to pay her for the dinner. It's like something icky about that to most people.

ROBIN: Yes. And the fact that money's involved also makes it much easier for people to imagine the kinds of foul play you were thinking of before. And somehow we don't imagine that so much when there's, say, a government bureaucracy and people bucking for promotion. They don't really imagine that those people would lie or stab people in the back or do something bad for the public just to get promoted and get a better job. They don't think that happens much, for some reason. But they think if people were betting on it, they could do all sorts of nefarious things to try to make more money there, because it's more directly using money.

SPENCER: Yeah, I would also add to that, financial systems can be quite complicated. Your average layperson, I think, correctly knows that they don't fully understand the system. And maybe there's a natural distrust of systems that you don't feel familiar with, that seem complicated and technocratic. Like, you could be gamed without realizing and understanding how you're being gamed.

ROBIN: Although our current regulatory systems are, in fact, pretty complicated. I think most people don't understand how electricity is regulated, or the military is run or does purchasing. People are, in fact, part of large, complicated systems, like the criminal justice system, etc. People don't know how most of those work, but they still are mostly okay with them, when they think people are just employees following rules getting promoted.

SPENCER: There also may be a significant preference for the status quo, right? It's like, "Well, America seems to have done quite well, so far, in many ways. So, if we try some radical new system, who knows?" Maybe there's something to the thinking that, on average, new things are much worse than things that have survived for centuries.

ROBIN: Sure. But there is a substantial appetite for innovation in many other areas, like movies, or cell phones, or social media, or vacation spots. Even if, on average, we all sort of like to not change, we do often try changes, and even admire people who push changes and adopt them and experiment with them, but less so in the social areas. I think the conservative feeling that new things are going to be dangerous or problematic is stronger here.

SPENCER: It seems like a reasonable testbed for some of your ideas will be something like a DAO, one of these autonomous crypto-blockchain entities. Has this been tried?

ROBIN: Most of those DAOs aren't actually doing anything. So in general, in innovation, what you don't want to do is combine multiple risky innovations in the same venture. So if you have some innovative concept or approach, what you want to do is get it to be tried in some context where they're not simultaneously trying several other risky, innovative things, because then, if something fails, it will be hard to tell what the cause was. So I'd much rather just try this in very simple environments. I can give you some recommendations. I told you about new hiring; that's a very standard process, and we could do that there. I also have a simple idea — even more simple — for how you could, say, have a restaurant decide which specials to adopt every night for their menu. That would be a context where you could test it daily and get a lot of experimentation done.

SPENCER: So what would that look like in a restaurant context?

ROBIN: Here, the idea is, say you had a board, where you had pictures of 10 or 20 dishes that are on your menu. And you give everybody in the back room, who works there, a dart and they can put this dart anywhere on the board, on any of these dishes. So the idea is that, at a certain time at night, when we need to post the specials, we will take the two dishes with the most darts in them. And those will be the two specials for the night. And then whichever dish sells the most revenue that night, then whoever put the darts on that will divide up a prize, like a parimutuel. So now all people have to do is just, during the day, stick a dart on a dish. And later on, as they see (maybe) their dish isn't going to be one of the top, to move their dart to another dish. And then there's a deadline when at that time we go with where all the darts are. That's basically like futarchy here, except it's just very low mechanical cost. All you have to do is move a dart. But that would be a way to experiment.
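The dart-board mechanism Hanson describes can be simulated in a few lines. This is a rough sketch under the interpretation above, with the dish names, revenue figures, and prize amount all invented: the two dishes with the most darts become the specials, and the prize is split parimutuel-style among the darts on whichever special earns the most revenue.

```python
from collections import Counter

def pick_specials(darts, k=2):
    """darts: one dish name per dart on the board.
    The k dishes with the most darts become tonight's specials."""
    return [dish for dish, _ in Counter(darts).most_common(k)]

def settle(darts, revenue_by_special, prize):
    """Parimutuel payout: the special with the most revenue wins,
    and the prize is split equally among the darts on that dish."""
    winner = max(revenue_by_special, key=revenue_by_special.get)
    share = prize / Counter(darts)[winner]
    return winner, share

# Hypothetical evening: six staff darts across three dishes
darts = ["soup", "soup", "steak", "pie", "steak", "soup"]
specials = pick_specials(darts)  # soup (3 darts) and steak (2 darts)
winner, share = settle(darts, {"soup": 300.0, "steak": 450.0}, prize=90.0)
```

Here steak out-earns soup, so the two darts on steak each collect half the 90-unit prize: this is the same structure as a decision market, with darts standing in for bets.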

SPENCER: I think it's a lovely idea because it's so simple. But have you tried to convince any restaurants to do something like this?

ROBIN: Not only have I not tried to convince any restaurants to do this, I really haven't convinced anybody to do any substantial experiments with these decision markets. There have been a few lab experiments. But this is what I would say is the highest value altruistic cause, honestly. That is, we have a lot of good ideas for institutions, and we have a lot of ways they could be applied on larger scales. And then we have this limiting factor in the middle. The bottleneck is trying them out at small scales, working out the bugs at small scales, so that you could then move them up to larger scales. And the potential is enormous. Think about all the policies we've ever gotten wrong because the political process just doesn't listen to good evidence; it wants to pander to public opinion more. Think about how much we've lost over the decades and centuries from that. You can see there's a lot at stake here. We could sweep most of that away in one fell swoop. But the limiting factor is just doing small scale experiments. And that doesn't excite people. People have enormous energy to lobby and be an activist and to fight for their side over the other side. They love that. Messing with their personal lives to do small scale experiments on things that aren't particularly their side versus the other side: pretty boring.

SPENCER: I suspect that part of the reason people push back against your ideas is that your ideas often violate the heuristics that people operate on. So one heuristic might be, "Don't mix money into certain kinds of things." Like, it's not good to use money to decide who gets to propose a bill, that kind of thing. I think a lot of people just, on a gut level, are resistant to that. I'm wondering if that resonates with you. Do you think that your ideas do tend to violate people's heuristics?

ROBIN: Right. I was trying to understand that process better. And so actually, over the last year, I had a project where I tried to understand the sacred better. So in some sense, the concept of the sacred has been in my way when I tried to think of these things. And I finally decided I would just face it head on and try to make sense of it. And I think I did make a lot of sense of it. So I think I have a better idea of what the sacred is and why it's an obstacle, and it's allowed me to at least come up with one clever new idea whereby we might be able to sort of defuse or make use of the sacred. But it's not a general panacea, unfortunately, for how to fix all these things.

SPENCER: How would you think about what the sacred is, and then what's your approach to thinking about how you would get these adopted, even if they violate sacred intuitions?

ROBIN: What I did is I collected 64 correlates of the sacred, things that people said went along with it, and I clumped these into seven themes of the sacred. Then I asked how we could explain those seven themes: what sort of theory of the sacred would make sense of that? I found a famous sociologist from long ago, who wrote a book on religion, and he had a story of the sacred that explains three out of the seven themes handily. And then I found an auxiliary assumption from psychology called construal level theory. When I combine that with this initial theory of the sacred, it lets me explain the other four. And together, now I have a plausible explanation for all seven themes of the sacred. So the basic theory is that we bind ourselves together in groups through shared views of the sacred. That is, sharing the same idea of some things being extra valuable, things that should be treated in certain ways, binds groups of people together. And that explains why they highly value sacred things, and why they go out of their way to show their value for sacred things. Then the other four themes to be explained are: why we set apart the sacred (don't want it mixed up with the profane); why we idealize the sacred; why we have a norm that you're supposed to feel, and not so much think or calculate, about the sacred; and finally, why concrete things tend to become sacred by touching abstract sacred things. Those last four are well explained by invoking construal level theory, wherein we see some things up close and some things far away. And that variation in construal is an obstacle to the sacred because, say, if you and I both want to see medicine as sacred, and you see your medical treatment up close and I see it far away, that's likely to make us see it differently. And that's an obstacle to us binding together and seeing it the same. So the hypothesis is that, for sacred things, we instead see them as if from afar even when we're close.
And that lets us bind together in groups to more agree about how to see and treat sacred things. It comes at the cost of not being as accurate or careful in how we treat sacred things. But it still effectively binds us together. That was a very brief summary. But the institutional invention I came up with on the basis of this insight was to try to think about sacred money.

SPENCER: Sacred money. Okay, fascinating. What is sacred money?

ROBIN: So part of the obstacle, or the problem with money, is you can spend it on so many profane things. It's not itself sacred. So sacred money would just be money that's committed to only being spent on sacred things. It's consecrated literally toward the sacred.

SPENCER: So what can you spend it on?

ROBIN: You can spend it on health, or education, or charity, because those are sacred ends. In tax law, we actually have a list of things that you're allowed to get tax deduction treatment for, and those are, in some sense, a proxy list of sacred things. So the idea is you take ordinary money to a sacred bank, and they give you back sacred money, which you can only spend on sacred things. But say a hospital could take your sacred money in trade for services, and then they could give that sacred money to (say) the surgeons or janitors who helped provide their services. And those people could take that money to the sacred bank and get regular money to pay for their living expenses. But the bank would be verifying that, in fact, this was spent on a sacred end. So now with sacred money, we could have a prediction market or a futarchy market where only sacred money could be bet, and now it's a more sacred enterprise. I could use sacred money in some other innovative proposals. And we could allow sacred capitalists to have sacred money and then invest it in sacred ventures, some of which will succeed and some fail. And if they make more sacred money from their ventures, they could then reinvest that in new sacred ventures. And maybe we could all approve more of these capitalists, because they are only investing in, and gaining sacred money from, their sacred ventures.

SPENCER: It's such a funny idea, because it's almost like there's a way in which it almost seems like a parody of what an economist would say to solve it.

ROBIN: Yes. [laughs] Guilty as charged, I guess. Nevertheless, it seems worth thinking about.

SPENCER: But it's also super fascinating. I wonder, it seems to me that if sacred money were to get too tied to regular money, it would lose its sacredness. Like, if you could convert it too easily?

ROBIN: Right. That is one of the design questions. That is, if there was just a market where you could directly trade sacred for regular money, that would be a problem, I think. Say I'm a rich person, and I take a bunch of regular money and convert it to sacred money. I make a sacred venture, it's successful, I get a lot of sacred money in return. And then I could trade it back for regular money and buy prostitutes and yachts and all the things people disapprove of. People would think that was a bad story, and they wouldn't like that. So I think you might have to disallow the direct trading of sacred money for non-sacred money.

SPENCER: Right. So you can turn money into sacred money, but not the reverse. You can only use the sacred money for sacred purposes. Presumably, you'd give it to charity. That would probably be an allowed purpose. It's really interesting. You mentioned that you had another proposal based on this. What's that?

ROBIN: So I wanted to pitch this idea because it has no losers. With many of these proposals that I come up with, you could think, "Well, somebody won't like this, because you're taking their job, you're eating their lunch." This proposal isn't as grand, but it's still pretty big, and arguably nobody loses from it. I'm gonna call it 'tax career agents'. At the moment, many people have agents — like in music, or in acting, or in sports, or writers — and they pay these agents (say) 10 to 15% of their income, and the agents advise and promote their clients. And that's good. Most other people don't have such agents. And people make a lot of bad career choices and educational choices for lack of good advice about what to do. So my proposal is to give everybody a career agent who, on average, gets 20% of their income. But I can do this for free. For anyone who wants it, it won't cost anyone anything. That's my big pitch. And you can think, "Well, how can you do that?" And the key story is, at the moment, the government is your tax career agent; they just do a bad job. On average, in the US, the government takes 22% of income. So it is, in fact, your agent. It should want to advise and promote you in your career, but it doesn't. So the key proposal is to transfer that role to somebody else who will do a better job. That's the key idea. So how can I transfer this role? At the moment, governments tax, but they spend more than they tax. And so in order to pay for the difference, they borrow money — which is basically taking tax revenue from the future and moving it to today — so they can pay for current expenses today. Instead, I'm going to propose that for an individual person — say, John Smith — we hold an auction, and we say, "Who wants to get all of John Smith's tax revenue from now on?" John Smith will keep sending that money to the government, and the government will turn around and send it to whoever wins the auction.
And now, whoever wins that auction becomes John Smith's tax career agent. They will now get 20% of John's income, and they will have an incentive to advise and promote him. And the government gets that revenue in the auction, which is now a substitute for the money they would have borrowed. And as I promised, nobody loses. The agent isn't even created if John doesn't want it. The agent is willing because they volunteered. The government gets their payment in a different form. Nobody loses. And John gets an agent. That's the idea.

SPENCER: The government, basically, instead of collecting the tax revenue over time, they get an upfront payment. Basically, that would be equivalent to some kind of discounted cash flow of the future taxes, right?

ROBIN: Right. And it's equivalent to the money they would borrow instead, and promising to pay back later, based on tax revenue.
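The auction price Spencer and Robin are describing is just a discounted-cash-flow calculation. As a minimal sketch, with every number invented purely for illustration (nothing here is from the episode):

```python
def present_value(cash_flows, discount_rate):
    """Discount a list of annual cash flows back to today's dollars."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Hypothetical: John is expected to pay $20,000/year in taxes for 40 years,
# and bidders discount future money at 5% per year.
expected_taxes = [20_000] * 40
price = present_value(expected_taxes, 0.05)  # roughly $343,000
# The nominal total is $800,000, so the winning bid is far below it.
# An agent who believes good advice will raise John's income can bid more.
```

Whoever believes they can raise the expected cash flows the most, by giving advice John will actually follow, can rationally outbid everyone else, which is exactly Robin's point about who wins the auction.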

SPENCER: I guess you could argue that they may even make more on average, because they're presumably not willing to sell it unless they think it's worth at least the amount of future tax flow.

ROBIN: Right. The person who wins this auction is the one who most believes they can improve the value of this asset. That is, that they could advise you, you would listen, and you would do something different, and then make more money. And the government, on average, would then make more money too, because the auction revenue would go up. So the government gets more money, you get more money, the agent gets more money. Everyone wins.

SPENCER: So what do you think would actually happen in practice? Do you think if this was implemented, there would be large corporations that all they did was buy people's future tax revenue, and then try to help them with their careers?

ROBIN: I certainly think a lot of capital will go into this. It would become a new asset class, basically, like real estate or stocks or bonds. And people would spend a lot of time and trouble asking, "How do you improve the value of this asset?" So, a key point is to get this person to listen. If this person is suspicious of large corporations, then large corporations aren't the best person to hold this asset. Maybe they have a sports hero, maybe they have a church they trust, maybe their uncle. Whoever they would actually trust, and who could actually give them the best advice, they would be the best person to hold this asset. And the market would move the assets to those people.

SPENCER: Now can people buy their own future tax revenue?

ROBIN: Yes. So that might be even the best use. That is, you could just lower your future tax rate by prepaying it this way, in some sense. And now, in the future, you don't face the discouragement that your income is going to be taxed. Whatever income you make, you get to keep, because you paid for that ahead of time.

SPENCER: It's like paying all your taxes in one go.

ROBIN: Exactly.

SPENCER: But wouldn't that create a strange incentive, though? Like, you could fake a signal that you're not gonna make much money, buy your own future taxes cheaply, and then actually go make a bunch of money?

ROBIN: Sure. But it'll be hard to fool the markets in general. And it doesn't seem like such a terrible loss for society if you do manage to do that once in a while. In fact, you might have actually done the other thing.

SPENCER: So how did people react to this proposal?

ROBIN: A lot of people are afraid of this tax career agent as hurting them. They think this person will spread malicious gossip about them, or sabotage them, or somehow hurt them in order to make money off of them. Now, this is not what we usually see in athletics, or music, or other places where there are agents. But somehow people imagine that here.

SPENCER: In a normal agent situation, you can fire your agent, right? Would there be a mechanism like that? Let's say you really didn't like [inaudible] your tax revenue, or they were trying to coerce you or trying to use legal means to manipulate you or whatever?

ROBIN: Right now, if you make it clear to them that you're not going to listen to them, the value of the asset to them is lower. If you tell somebody else, "If you buy that asset, I will listen to you," for them, the value of the asset is higher. So if they are convinced, then there's a profitable deal between them, where the first sells it to the second. So one mechanism is you just make it clear that you're not going to listen to them, you would listen to somebody else, so they should just sell it to somebody else. We can make that even more likely to happen using something called a Harberger tax, a self-set property tax, where basically they would have to set a price on this thing all the time and pay a tax based on that price. And then if somebody else paid that price, they just get it. And that's a way to make transfers easier in real estate and in this context. So there's a number of things we can do. Of course, we can also just say, the agent isn't even created until you approve of it. And you might not even approve it until you get somebody to say, "Yeah, I'll bid on it. I intend to win this auction. I intend to be your agent."
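The Harberger tax Robin mentions can be sketched as a toy model. Everything below (the tax rate, the names, the prices) is hypothetical and just illustrates the mechanic: the holder posts a price, pays a recurring tax on that price, and must sell to anyone who offers it:

```python
TAX_RATE = 0.07  # hypothetical annual tax rate on the self-assessed price

class HarbergerAsset:
    """Toy model of a self-assessed ('Harberger') taxed asset."""

    def __init__(self, holder, self_assessed_price):
        self.holder = holder
        self.price = self_assessed_price

    def annual_tax(self):
        # Posting a high price to block a sale is costly every year.
        return self.price * TAX_RATE

    def buy(self, buyer):
        # Anyone who pays the posted price takes the asset; the new
        # holder would then post a self-assessed price of their own.
        previous, paid = self.holder, self.price
        self.holder = buyer
        return previous, paid

# John's agent claim starts with a corporation; John signals he'd listen
# to his uncle instead, so the claim is worth more to the uncle, who buys it.
agent_claim = HarbergerAsset("MegaAgentCorp", 500_000)
tax_due = agent_claim.annual_tax()             # about 35,000 per year
previous, paid = agent_claim.buy("TrustedUncle")
```

The self-assessed price is what keeps the asset liquid: a holder can't both undervalue it for tax purposes and refuse to sell, which is why transfers to whoever the client would actually listen to become easy.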

SPENCER: So the idea there would be that the government has to approve the transaction, but also the person who would be represented has to approve it. And that seems to create greater safety around it.

ROBIN: Right. But even so, people are afraid of this agent. Somebody who actually paid half a million dollars to be your tax career agent, who is therefore very invested in understanding your career and figuring out how to promote it, and plausibly an expert at it (given that they were the one willing to pay the most for this asset). Nevertheless, people are afraid of them.

SPENCER: It's interesting, because if I just think about what I would predict, it seems like a lot of times, it might be fine. The agent might use really good methods to try to help you make more money, like offering you opportunities, or trying to help you find a better job or giving you training. But then you can imagine in some percentage of cases, they might do things that actually create psychological pressure, or use manipulation, or legal threats, or gray-area threats (sort of right on the border, little things like that). And so maybe that's what people are concerned about?

ROBIN: But interestingly, note that you already have many people in your life who have an interest in you. You have a spouse, you have a father-in-law, you have parents, you have a coach, you have a teacher, you have an employer.

SPENCER: But you can fire all of them essentially, right?

ROBIN: But they can all do those things to you as well. They could all manipulate you, or lie to you, or hurt you somehow, or all those things.

SPENCER: And sometimes they do, but you could also leave them, right? And so I think, maybe if there was a clear mechanism by which people could force their agent to leave, then it would be more analogous.

ROBIN: I just described a mechanism for that, the Harberger tax. Basically, everybody has to set a price. And then if you pay the price, you get it.

SPENCER: Right. And that seems to me to make it a lot safer. Although I wonder there, again, often it feels like you propose these very technical solutions, which are well thought out. But I wonder if just the average person has this sense, "Well, I don't really understand this technical solution in detail. I don't know that I could really evaluate how this would actually play out."

ROBIN: Let me tell you the last explanation I have for reluctance on these topics. That's the idea that we want proposals for innovation and change for our society to come from a certain class of people — not everybody — who I'll call 'elites'. And we only really want to listen to proposals from elites. And then we mainly want our institutions to be ones where elites take a prominent role. We mainly want to trust people to run things, not systems to run things. And part of our system of innovation is we trust particular people to make innovation suggestions.

SPENCER: Are you saying that people have a preference for a sort of elite group of people making decisions? Or just people in general, who you're calling elites?

ROBIN: Yes. We have a predefined concept in our minds of what is an elite. We agree on that concept. And we do tend to want positions of power and influence to be filled by that sort of person. And we're willing to accept some kinds of governance and rules and impositions on us when they come from elites, but we're not willing so much from other people. And so then a criticism might be that people are not very assured that elites would be filling the roles here. And they don't want to hear this from me, because I'm not an elite. I do think in my experience of seeing when new institutions have been adopted, or even experimented with, typically they come with some substantially elite backer, who's the visionary advocate for them. And people are especially interested in liking that person and identifying with them before they're willing to consider their proposals.

SPENCER: So who's an elite, by this analysis? What do you mean by that word?

ROBIN: Well, elites would just be our shared sense of who's admirable. Celebrities are somewhat elite. Rich people are somewhat elite. People with high degrees are elite. People at elite institutions are elite. So if you think about elite events (like Davos) or elite conferences, you include a lot of different kinds of elites, but you're still very selective about who you include. And you include people who, when other people see those people, they say, "Yeah, they're elite." So it's a shared judgment — something like social status — of who qualifies as one of the top people.

SPENCER: Is it shared just within an in-group? Because, for example, conservative elites might be really different from liberal elites.

ROBIN: There's a lot at stake in who's seen as the elite, so people often try to pull that definition in their direction. So scientists may want to poo-poo actors as elites and say, "Only scientists should count." And other people will push for their kind of elite. So there's a lot of politicking that goes into trying to redefine who should count as elite. And a lot of energy even to sort of character-assassinate people and kick them out so that they aren't competing with you among the elites. But I still think ordinary people have the sense that you need someone reasonably elite to fill these roles, and they are just not very tolerant of people who don't look right for the role.

SPENCER: So you're saying people view you as not elite, and therefore, more likely to ignore your proposals?

ROBIN: And also, the speculators in a futarchy: people might not see them as elite, yet they'd be making these key policy decisions. But how impressive do they look? Do they look sharp and well-dressed? Or are they slovenly and sitting in their pajamas while they're trading? That image bothers people.

SPENCER: Do you chalk this up to a sort of evolved heuristic of who we want to trust as leaders, or something else?

ROBIN: But I think humans have, for a long time, used status (including dominance and prestige) to adjust our behavior. This seems to be a robust feature of human behavior across societies and time. We copy other people's behavior, but we prefer to copy elites' behavior rather than average behavior. We want to be copying the best. We want to be seen as elite, as high status. We are all, in some sense, looking to see how we could rise in status (what markers we could acquire for ourselves to make us seem more elite). And this is just an important part of human behavior everywhere.

SPENCER: So if this is sort of a fundamental reason why people are resistant to some of your ideas, do you have ideas how to get around it?

ROBIN: Well, you could just be more explicit about requiring some sort of elite criteria to fill roles. So maybe not everybody could be a tax career agent, right? Maybe you have to get a sort of elite approval — some sort of a degree or a regulatory mark of approval — so that you can fill a certain role. And then that might reassure people that the agents are sufficiently elite. We do that for many professions, like law and medicine, etc. Journalism even. We make sure that people who fill certain roles tend to be elites: go to the right schools and look the right sort of way. And we have many ways to enforce that.

SPENCER: You've touched on a few different theories of why your ideas aren't more adopted. You mentioned the elite one just now. Earlier, you mentioned that some of your ideas might violate people's sense of the sacred. I guess, when I think about your ideas and why they aren't adopted, both of those seem like part of the explanation to me, but I think I would add in some other things there too. One thing is that I just think your ideas are so different from what people usually think. And I think any idea that's really different from the mainstream is going to struggle to be adopted. And as you pointed out, you want to kind of not change too many variables at once. And I think that's the right intuition. But still, your ideas are really, really different from what we're used to. So there's just a really strong default bias. We've also talked about how people tend to be more comfortable with people in control than systems. One thing that reminds me of is the way that some algorithms have been run now to help decide (like) predicted recidivism rates that judges could then rely on when they're deciding on parole and stuff like this. And people are really resistant to this. Because even if judges themselves may be biased, people are really deeply uncomfortable with an algorithm being involved in that decision. And they'd rather have a human be biased on some level than an algorithm. So I feel like these are all factors. But I also feel like there may be an additional factor, which is something like, there are some people that are just bad people, like they're going to try to break a system to exploit it. And it feels to me, when I consider some of the ideas that you propose, that's where my mind goes: what is the worst human being in the world gonna do with the system? And I'm not always that confident that there's not some horrible way to break it that ends in some really bad calamity. So, I'm curious to hear your thoughts on that.

ROBIN: I suspect you're right. But I suspect it's not that you're cueing off of some particular feature of my proposals that makes it seem like some bad person would do a bad thing; I think it's more likely you're not seeing the usual positive features that usually reassure you that something's okay. And that's more what I'm trying to understand. The elite thing is an example of that category. If you see that elites are associated with this, and they're in charge, that makes you feel more okay with it, all else equal.

SPENCER: And maybe it's more similar to systems that we've seen work in the past. Maybe that also gives a degree of comfort like, "Well, that hasn't been gamed so badly that it's ruined everything," right?

ROBIN: Right.

SPENCER: There's also something about sort of more technical arguments that I feel may bring this up more, where you're like, "Well, look, I made this long chain of reasoning that says this thing is okay." But it hasn't been vetted against the real world. I can't point to certain specific implementations.

ROBIN: So remember, my main audience is somebody who might do small scale trials. Once you have small scale trials that are successful, I don't need the other arguments anymore. We can just point to success in the trials. So the question isn't so much, "What would it take to convince an ordinary person about this?" It's more, "What does it take to convince someone who might be willing to do a small scale trial?" And for them, you think they might have more tolerance for complexity of argument. In many areas of our world, there are people who are eager for very creative, unusual proposals. They want to (I don't know) think about UFOs, or they want to think about fusion energy or some new approach to AI. There's a lot of people who, when they see a very creative and unusual approach, are extra excited by it. And there's a lot of people who say that their mission in life is to try to improve the world. And they're all about that. And they will talk to you at great length about all the things they are trying to promote that they think would make a big difference. So in the context of those two factors, I still think it's a little puzzling that you can't find some of those people to be interested in taking a creative suggestion for improving the world that would just require some small scale experiments, and get those experiments tried.

SPENCER: I can understand why you'd be reluctant to try to roll out your proposals in already kind of controversial and cutting edge and strange things like DAOs. But it just seems to me the group of people who are doing DAOs are already doing weird experiments with social decision making, and so on. And it's like, for them, this just might be another cool social decision making experiment that could slot in with their existing work. Whereas like your typical restaurant owner, they're just trying to figure out how to get a few more people in the seats, rather than experimenting with entirely new systems.

ROBIN: Well, I don't really turn anybody down. So if somebody is serious about wanting to do an experiment, I'm happy to work with them. People have talked about doing this for DAOs, but haven't really actually done it.

SPENCER: It's funny, because I think, often, if you're trying to convince someone to do something new, you have to bring it to something extremely practical, that they're like thinking about on a day to day basis, right? It's like, if you're pitching the restaurant owner, you're gonna be like, "Okay, you're trying to put more people in the seats. If we can get better specials that make more money for you, well, that's a huge win on a day to day basis. I've simplified this to five little things you have to do, and then you're gonna make more money on your specials." Right?

ROBIN: Right. But the main thing I'm hoping is that somebody who's listening to this will maybe tell somebody else, "Hey, there's this really exciting prospect, and it would just take a person like you to try it."

SPENCER: Yes, you listener, maybe you will be the first to implement some of these ideas.

ROBIN: Right. I've been pitching this for many decades. I started writing about this in 1988, I guess. So we're going on 35 years here.

SPENCER: So there must have been some examples of some of your ideas being implemented. What's the closest thing that's happened to implementation?

ROBIN: Well, people have made prediction markets in some areas. And they've done prediction markets in firms for things like deadlines and sales and things like that. And we have some interesting experiences there that are somewhat of a clue to what the obstacles actually are for prediction markets in organizations. The most successful prediction markets in organizations are ones on topics the farthest away from what any management would express an opinion on. That's the key correlation. So that suggests the key problem is that managers are in the habit of expressing opinions on things all the time; then you introduce a prediction market, and the prediction market has an opinion that differs from their opinion. And then the prediction market is proved right, and they're proved wrong. And that makes them look bad, and they don't like it, and the markets get killed off. We've actually seen that scenario a lot. So the problem is that managers often use what they say, politically, for various purposes. They aren't trying to be entirely accurate. They have other purposes. And then the prediction market is like a savant who's very smart, but very socially stupid, and just says the truth every time, no matter what happens. You wouldn't accept a person like that in the C-suite. Around the C-suite table, you're not going to put someone who just blurts out whatever the truth is, whenever the subject comes up, without thinking carefully about whose ox that might gore. But that's what a prediction market is. It just says what it thinks constantly, all the time. And so that suggests that one of the biggest obstacles for prediction markets in organizations is the fact that it's going to conflict with the other usual political processes about who says what, for what advantage, which is why we need a lot of experimentation to find a better approach. So I like the analog of cost accounting. As you know, it's possible not to do cost accounting.
So imagine a world where nobody did cost accounting, you come along, and you say, "Hey, let's do cost accounting." Who do you think would object?

SPENCER: Presumably, whoever is going to look worse from implementing it.

ROBIN: Right. Well, somebody might actually be stealing something. And somebody might be worried about the impression that somebody thinks they're stealing. You're basically accusing people of stealing something when you say, "We need to do cost accounting here." Now imagine a world where everybody does cost accounting and you say on a project, "Hey, let's not do cost accounting." Now who looks bad?

SPENCER: So is the idea that you basically have to find a way to implement these that doesn't make anyone powerful look bad? Is that the takeaway?

ROBIN: Also, there's multiple equilibria, right? In a world in which nobody does cost accounting, it's hard to introduce. But in a world where everybody does, it is hard to take it away. So similarly, if on every deadline there was always a prediction market, then if you said, "Hey, let's not do a prediction market on this deadline," you'd basically be saying, "Can we just not talk about the fact that we're not going to make this deadline?" That would not look good, right? But if you're in a world like we are today, where every project with a deadline does not have a prediction market, then if you say, "Hey, let's do a prediction market on this project," what people hear is, "We're not going to make this deadline. Let's show that." You're questioning their current consensus by asking for the prediction market, which also doesn't look good.

SPENCER: Yeah. I think a recurring theme in your work is that people say things for different hidden reasons. And with a deadline, it seems like a big reason people give deadlines is a forcing function. It's not so much a prediction as a way to try to make something a self-fulfilling prophecy.

ROBIN: Right. But you might think they would want a truthful estimate of whether they're going to make the deadline, but often not so much.

SPENCER: Robin, thank you so much for coming on. This has been a fascinating discussion.

ROBIN: It's been great to talk to you. I hope we both inspired people a little, even if we've given some downer descriptions as well.

[outro]

JOSH: A listener asks: "A lot of our knowledge about psychology and social science is sort of locked up in academia. How can we make psychology more useful and applicable and pull those insights out of academia and into the everyday lives of people?"

SPENCER: Well, this was definitely one of the inspirations behind our website, clearerthinking.org, where we actually try to take insights from different fields like psychology and economics and math and so on and bring them to people in a useful way. Our approach is trying to make these interactive experiences where we actually use examples that are relevant to our audience, relevant to users. So trying to apply the principles to your own life, trying to apply it to situations you actually have rather than just abstract ones, but also making it interactive so it's not just reading a paper, but actually trying to use the content as you learn it. So that is our approach. I think there's way more room for people to do this with many other things. I mean, on our website, we cover about 70 different topics, but there's so many potential topics this could be done with and also ones well outside of psychology that could really benefit people to learn about.
