CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 119: Voting method reform in the US (with Aaron Hamlin)

August 25, 2022

Does the US have one of the worst implementations of democracy in the world? Why do people sometimes seem to vote against their own interests? Is it rational for them to do so? How robust are various voting systems to strategic voting? What sorts of changes would we notice in the US if we suddenly switched to other voting systems? How hard would it be to migrate from our current voting systems to something more robust and fair, and what would be required to make that happen? Are centrist candidates always boring?

Aaron Hamlin is the executive director and co-founder of The Center for Election Science. He's been featured as an electoral systems expert on MSNBC.com, NPR, Free Speech TV, Inside Philanthropy, 80K Hours, and Popular Mechanics; and he has given talks across the country on voting methods. He's written for Deadspin, USA Today Magazine, Independent Voter Network, and others. Additionally, Aaron is a licensed attorney with two additional graduate degrees in the social sciences. You can learn more about The Center for Election Science at electionscience.org (https://electionscience.org/) and can contact Aaron at aaron@electionscience.org.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Aaron Hamlin about organizing a democracy, voting methods and ballot access.

SPENCER: Aaron, welcome.

AARON: Thank you, Spencer.

SPENCER: Some people say that, for a democracy, the US has one of the worst ways of deciding on its leader. Do you want to tell us whether that's true and why you think that's true, if you do?

AARON: I would say that what some people are saying is probably right. There are a number of ways that we can organize a democracy, and the way that we do it is not very good. When thinking about this problem, I think one of the ways to look at it is to look at it from its fundamental components, which is the way that we vote. When I say the way that we vote, I mean, like when you go into the ballot box, and you see the ballot in front of you, and you read the instructions and it tells you how to check off these candidates, and then how that information is calculated to produce a result and what's ultimately binding, or what determines who is elected to those particular offices. And the way that we do that is really bad.

SPENCER: Right, so in the US, let's say it's a presidential election, you go in and you pick the name of one candidate. What's the name of that? Is that called “plurality voting” or “first past the post voting”?

AARON: Yep, both of those are correct. In the voting methods world, we are terrible at naming. And the confusing part is some voting methods have a bunch of different names. Both of those are correct.

SPENCER: Got it. So how did you come to think of this as a major problem?

AARON: I think for a lot of folks, the idea of a voting method is largely invisible. We just think, “Oh, this is just voting itself,” without really recognizing that there are all kinds of different voting methods, and this is just one way of offering information on a ballot and having it calculated. The concept came on my radar when I was in graduate school (this was during the 2008 primaries), and I was talking with my classmates — we were all talking about who we were going to be voting for in the primaries — and my friends were also all part of a student group pushing for the health care reform effort. And so I kind of expected everyone to be supporting candidates who were in line with the health care reform that we were in this group for. And to my surprise, that was not the case at all. In fact, people were talking about voting for people who were opposed to the reform that we were interested in. And, of course, when it got to me, it was, “Well, of course, I'm going to vote for the candidates who actually support this reform.” And the kinds of responses that I got back were things like, “Well, you've got to take this kind of incrementalist approach,” or “If you vote for those folks, they're never going to win; you should vote for these folks instead, and maybe in some number of decades, you'll get what you want.” That was really disappointing to me, and my takeaway was: well, if my friends here, who are really engaged, aren't going to vote for people who support their interests, then really, who will? That's when the concept of the voting method came on my radar, because it was becoming obvious that there were other factors pushing them to behave against their interests.

SPENCER: So what kind of arguments did they give for why you shouldn't vote for the person who supports this policy that you all really cared about?

AARON: Well, unfortunately, with a lot of policies, some of them can be really popular, but if, say, a major party isn't on board, you may get candidates who don't have a lot of support. And so the types of candidates that would support this particular policy — and you can imagine your own policy, I'm sure; we all have issues that we care about that aren't necessarily held in high regard, or pushed heavily by major parties — may not be viable. And so they were saying, “Well, if you vote for this candidate that you really like, or who actually supports the policy, they're not going to win, and so you're throwing your vote away. And if you do that, then there's this other candidate who not only disagrees with you on the issue that you care about, but will disagree with you on a whole bunch of other issues that you also care about. And so it's much worse to support this candidate that you really like and risk this other candidate — who everyone really doesn't like — winning.” So it was a lot of “supporting your favorite is going to help this other person you really don't like win,” and then this weird strategy of incrementalism where you choose this other candidate who doesn't really care about your cause that much anyway, but who won't be as vehemently opposed to it as this other candidate. And so, over time, you'll maybe gain inches along the way on your multi-mile-long goal. It felt so far away from where my goals were that it seemed bizarre to me. But this bizarreness is just so commonplace.

SPENCER: So can you walk through a real example where these kinds of concerns have happened with the current voting system and how the current voting system enables that?

AARON: I think perhaps the classic one (and it's becoming dated now as I grow older) is the 2000 presidential election in the US, where you had George Bush, Ralph Nader, and Al Gore all running, and it came down, in that particular instance, to Florida, where if voters in Florida had voted for Gore instead of Ralph Nader, it would have changed the outcome of the election. Now, of course, there are a bunch of other things involved there, such as the bunch of other liberal third parties in Florida. Had those voters chosen Gore instead, it also would have changed the outcome of the election, because the margin of victory in Florida was so small. So there are a bunch of factors. There's also the question, “Had those voters in Florida not voted for Nader, would they have voted at all?” But presumably, had they been given some flexibility in the way they were able to vote, enough of them would also have shown support for Gore. This is one instance where, because voters had multiple candidates who were similar enough to each other, the vote was divided between them. And so, as a consequence, you had a winning candidate who really didn't reflect the consensus of where the electorate was.

SPENCER: So the idea is, you might want Nader to win, but you think to yourself, “Well, almost certainly, Nader is not going to win. And I really don't want Bush to win. So I vote for Gore, because that's the only way my vote sort of counts.” Is that the reasoning?

AARON: That's right, yeah.

SPENCER: And do you think that that is rational given the system we have? Or do you think that they're making a mistake to reason that way?

AARON: Unfortunately, I think they're being rational in doing it that way, because the chances of Nader winning are so small that, by supporting Nader, they are casting a vote for an extremely low-probability event, whereas voting for Gore is much more likely to be material in terms of him winning versus the candidate that folks didn't like winning. And here, we're talking a little bit from the perspective of the left losing and the right winning in this circumstance, but it could just as easily be switched around the other way. It just happens that in the circumstance we're talking about, the vote-splitting is among the left; you could just as easily have vote-splitting among candidates on the right and have the same dynamic for voters.

SPENCER: So most people argue that there's still value in voting for a third party, even if you know they're going to lose, because it somehow gives more sway to that third party, or allows them to raise the topics the third party cares about in the conversation, because the other candidates see that they can win votes by talking about those topics. Do you think that there's legitimacy to that and, if so, how much?

AARON: I sympathize heavily with that argument. And in fact, I tend to behave that way, personally, with the way that I vote. It also relates to how seldom I vote for folks who win. But I have a lot of personal sympathy for that approach. And I think that factor goes into what makes the voting method good. So oftentimes, when we think about a voting method, we think, “Well, we want to make sure we get a good winner out of it.” Obviously, that's a really important role of the voting method. But another important job of a voting method is to make sure that it measures the support of all the candidates for the reasons that you're laying out, which is, we want to make sure that when a candidate brings ideas to the table, that those ideas are measured accurately in terms of support for that candidate. And if a candidate is the only one bringing particular ideas to the table, and they get like 1% or some nonsensically low support level, it sends a message that there's not a lot of support for those ideas. And if that's incorrect, that's a terrible feedback loop for us, because it's telling us the popular ideas are actually not that popular, which is incorrect. And so we want to make sure that a voting method not only does a good job in terms of giving the right winner; we also want to make sure that it does a good job of measuring the correct support for all the candidates and there are also positive feedback loops for that. So, for instance, if you do a good job measuring support for a candidate, it makes it easier for them to run the next time, those ideas get more traction, more people hear about it, and more people can be more inclined to support that candidate. So it's not something that we can take in isolation; these are things that feed into each other.

SPENCER: That's an interesting use case of voting as a way of expressing preferences that other people can be aware of, so you know how many people prefer different things, and then people can act differently based on that. But, stepping back, it seems like the main purpose of voting is, in a sense, to figure out what the will of the people is, right? So in some sense, you have a whole bunch of people who want different things, and you need some way of combining their individual preferences into a group preference. Is that accurate?

AARON: Yeah, I’d say that’s exactly right. The main job of a voting method is to take a bunch of people's preferences and aggregate them into a single decision.

SPENCER: Now, I think one thing that is surprising to a lot of people is how many different ways there are to do this. Because you might think, “Well, okay, if you have everyone's individual preferences, can't you just sum them up or something?” Do you want to give us some intuition for why this is actually a really hard problem of how do you aggregate individual preferences into a grand unified preference?

AARON: Yeah. First, when we're thinking about this, it may be helpful to break down what a voting method is. When I think about a voting method, I think about it in multiple components; one component is the information component, that is, what information are you providing that's going into the system? So it could be choosing one candidate, it could be choosing a bunch of candidates, it could be ranking candidates, it could be scoring candidates on a scale.

SPENCER: This is what each person does in the voting booth, right?

AARON: That's right. When you get your ballot, the type of information that you're putting down on that paper gives an indication of your preferences for these candidates. So that's information that you're putting down. And then secondly, well, you've got that information; you've got to do something with it. And so you're applying some kind of what we would call an algorithm to that information. It could be simple addition, so you'd just be adding up the votes for each respective candidate. If you're dealing with ranking information, you can do different things with that. You can simulate runoffs with that information. You can simulate head-to-head matchups with all the different candidates.

SPENCER: What is a runoff? Do you want to explain that?

AARON: Yeah. So a runoff normally is when you take some top candidates — often it's two — who had the most votes, and then you take another round where they go and are competing against each other, head-to-head. Now with ranking information, you can use that information to simulate these runoffs and there are different ways of doing that. So you could do it sequentially and take off the candidate who has the fewest first choice rankings and then look to the remaining candidates and transfer those votes over, and keep doing that until there are only two candidates. So that's one way of using ranking information, for example, to simulate a runoff process. But you can do all kinds of different things with this information. And that second part is what you do with that information to determine who the winner is. And then I would say an applied third component is how that support among all those candidates is reflected. So not just deciding the winner, but being able to see the results table as well, and who the winners are among that results table, and how much support each candidate has.
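The sequential elimination Aaron describes — repeatedly dropping the candidate with the fewest first-choice rankings and transferring those ballots to the next surviving choice — can be sketched in Python. This is an illustrative sketch only, not any jurisdiction's official tally rules; the ballot format (a list of candidates, most-preferred first) is an assumption:

```python
from collections import Counter

def instant_runoff(ballots):
    """Simulate sequential runoffs from ranked ballots.

    Each ballot is a list of candidates, most-preferred first
    (a hypothetical format for illustration). Repeatedly eliminate
    the candidate with the fewest top-choice votes and transfer
    those ballots to their next surviving choice.
    """
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot's highest-ranked surviving candidate.
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break
        total = sum(tallies.values())
        leader, leader_votes = tallies.most_common(1)[0]
        # Stop at a majority of non-exhausted ballots (or one candidate left).
        if leader_votes * 2 > total or len(remaining) == 1:
            return leader
        # Eliminate the candidate with the fewest top-choice votes.
        remaining.discard(min(tallies, key=tallies.get))
```

For example, with three Nader-first ballots that list Gore second, four Gore ballots, and five Bush ballots, Nader is eliminated first and his ballots transfer to Gore, who then wins.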

SPENCER: Got it. Right. So when someone goes into the voting booth, you could have them just pick one candidate, like we do here in the US most of the time. You could have them pick all the candidates they approve of — that would be approval voting. You could have them score each candidate, let's say, on a scale from zero to 10. You could have them rank candidates, saying, “Here's my first choice, second choice, third choice, fourth choice,” and so on. And then once you have picked one of those, you can then say, “Okay, how do we aggregate them?” There's a whole bunch of ways to take these results and run them through an algorithm that decides who wins. Is that a good summary?

AARON: Yeah, that's right.

SPENCER: Okay, cool. So, as I understand it, there are some things that make designing a good voting system hard. For example, there's the issue of gameability, where — you know, we talked about this with the voting system we use today, which is that — people may not express their true preference.

AARON: Yeah, that's right. So in addition to giving that information, there's also the issue of whether the information that you're getting is not only complete, but accurate to begin with. For example, if the information you're gathering is the one candidate that they want to see elected, then you're missing out on all the other information for the other candidates. But not only that, the one candidate that they're selecting may not actually be their favorite candidate — if, for instance, they suspect that their favorite has little chance of winning and that voting for them would be throwing their vote away. And so in that case, you're getting straight-up false information. This is an issue, really, that all voting methods face to some degree, in terms of a voter providing inaccurate information to try to maximize their own utility at the cost of the group selection.

SPENCER: Right, but someone who has a predilection towards utilitarian thinking might say, “Well, why don't we just score each candidate from zero to 10, based on how much utility we get from them winning, and then we can just take the sum of those, and that sort of maximizes the total utility.” But then you start to think about the game theory of that and realize, if that's true, you could have candidates telling people, “Oh, if you give me a 10, and you give everyone else a zero, that actually maximizes my chance of winning.” So that's actually a better strategy, and suddenly, now, people are not thinking about how much utility it represents but thinking about how do I game the system to maximize the chance of a particular candidate winning?

AARON: So that particular approach — telling them to, say, vote on a scale of zero to 10, and adding that up — falls within a class of what we call ‘cardinal systems.’ It's a voting method called ‘score voting,’ also known as ‘range voting.’ And that's actually a pretty good approach. And like you mentioned, like all voting methods, there are issues with strategy — so voters using more extreme scores, in some cases. And it's interesting: the example that you laid out suggested that candidates or parties would say, “Hey, all you voters, this is the way that we want you to vote.” And sure, candidates can do that, parties can do that. But we have to remember it's also up to the voter to maximize their own interests, their own utility, and that may not coincide with even their preferred party's utility, because we have preferences that are more complicated, that span beyond the one particular party that we happen to prefer. And so it may not be in the voters' interests to blindly obey what a candidate or party tells them to do. And when we looked, in 2016, at different voting methods and how voters would respond to different candidates in the 2016 US presidential election, we did see this effect of a lot of more extreme scores, but not to the degree that we would have suspected. So we see a good number of people who do vote strategically, and then we also see a good number of people who are a bit more honest with their selections. And when we see this overall, it tends — in the aggregate — to still kind of work out for us, at least with respect to this particular scoring method.
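The score (range) voting tally being discussed — every voter scores every candidate on a fixed scale, and the totals are summed — can be sketched as follows. The dict-based ballot format is an assumption for illustration:

```python
def score_voting(ballots, max_score=10):
    """Tally score (range) voting: every voter scores every candidate
    on a 0..max_score scale, and the candidate with the highest total wins.

    Each ballot is a dict mapping candidate -> score (a hypothetical
    format for illustration).
    """
    totals = {}
    for ballot in ballots:
        for candidate, score in ballot.items():
            if not 0 <= score <= max_score:
                raise ValueError(f"score out of range: {score}")
            totals[candidate] = totals.get(candidate, 0) + score
    # max() breaks exact ties arbitrarily; real rules would need a tiebreaker.
    winner = max(totals, key=totals.get)
    return winner, totals
```

Note how a strategic voter could exaggerate by scoring only 0s and 10s; the study discussed above suggests some voters do this, but not enough, on average, to badly distort the totals.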

SPENCER: Can you talk about the methodology there? Because I have a concern. As you pointed out, people are in the voting booth alone, the candidate can't really tell them what to do. But that being said, people are often influenced by candidates. So I am concerned about the game theory, the long-term game theory of that; even if the first time seeing it, people are not strategically voting, that they might come to do that. I also — maybe I'm wrong about this — but it seems to me that even if you're just maximizing your own rational interest, there can be an incentive to strategically vote. Is that true?

AARON: Yeah, there can be. And I think, for one component, there's always strategic voting in every voting method. What we have to consider is: how robust is the voting method to strategic voting? That is, in the face of strategic voting, how much is that material in changing the result, or changing the reflection of support for all the candidates? For example, when we looked at instances where approval voting has been used — which has been in St. Louis, and in Fargo, North Dakota — we still saw it match up with a kind of honest scoring, where we asked respondents in a particular study to say how they actually supported the candidates, just to be honest. And we could see what the discrepancy was between the approvals, where they picked as many candidates as they wanted, and this more honest control measure, which used a scale — in this instance, 0 to 5. So, for clarity, approval voting is when you select as many candidates as you want, and the candidate with the most votes wins. And for clarity again, there's no ranking in this; you can imagine the difference between a checkbox and a radio button online. This is like moving from that radio button, where you can only select one, to a bunch of checkboxes where you can select all the candidates that you want, and whichever candidate gets the most checks in their boxes, that's the candidate who wins. In this instance, it does a pretty good job of mirroring that honest measure, and we found the same thing when we looked at the 2016 presidential election with score voting, where we asked them to vote by scoring each candidate on a 0 to 5 scale. And to a large degree with approval voting as well — it was right behind score voting in terms of accuracy, in terms of mirroring the control measure that we included. And these are instances where all the voters still have the opportunity to vote strategically. And some of them do, but in the aggregate (at least with this particular set of voting methods), it still seems to work out well on average — and the average is what we care about, because that's what's determining the winner: the amount of support each candidate has on average.
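The checkbox analogy maps directly to code: an approval ballot is just the set of candidates a voter checked, and the tally counts checks. A minimal sketch, with the ballot format assumed for illustration:

```python
from collections import Counter

def approval_voting(ballots):
    """Tally approval voting: each ballot is the set of candidates the
    voter approves of (the checked boxes); most approvals wins."""
    tallies = Counter()
    for approved in ballots:
        tallies.update(approved)  # one vote for every checked candidate
    winner, _ = tallies.most_common(1)[0]
    return winner, dict(tallies)
```

Revisiting the 2000 example: if the Nader voters could also check Gore, the vote-splitting disappears, because supporting a favorite no longer costs support for a compromise candidate.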

SPENCER: Can you talk a little bit about the methodology of that study? Because I'm just wondering, how do you actually try to answer these questions in practice?

AARON: Yeah, with the 2016 study, which we've been using as a model, it took a bit of a novel approach. Historically, in voting methods research, a lot of what you would see is what's called a between-subjects design. You'd maybe get one study that asked different groups of people about different voting methods, or one study would look at a particular election with a particular voting method and then another study would look at the same election with a different voting method. What we did — which was a bit different — was use what's called a within-subjects design. And what a within-subjects design means in this instance is, we asked each respondent — a person who agreed to fill out this survey for us — to say how they would vote on each candidate with each individual voting method. So if you were a respondent, you'd be asked, “Okay, here's the selection; here's approval voting. How would you vote under this voting method? How would you vote in the same election with our ‘choose one’ voting method, or plurality voting? How would you do this with score voting? How would you do this with instant-runoff voting, also known as ranked-choice voting?” Each respondent would say how they would vote under each of these different approaches. And on top of that, we asked them another question, which was, “Okay, now that you did that, we want you to honestly assess each candidate on a scale of zero to five — to say honestly how much you want each of these candidates to win, using the scale.” And we can take that honest assessment scale and kind of superimpose it over each of the voting methods and see what that discrepancy is — to see each voting method's accuracy in terms of how closely it hones in on the honest score.
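One simple way to operationalize the “discrepancy” between a voting method's results and the honest 0-5 assessments — a hypothetical metric for illustration, not necessarily the one this study actually used — is to put both onto the same 0-to-1 scale per candidate and average the absolute differences:

```python
def mean_discrepancy(method_support, honest_scores, max_score=5):
    """A hypothetical accuracy metric (not necessarily the study's own):
    normalize a method's support figures and the honest 0..max_score
    assessments to [0, 1], then average the absolute differences.

    method_support: candidate -> fraction of voters supporting them
                    under the method (e.g. approval rate).
    honest_scores:  candidate -> mean honest score on the 0..max_score scale.
    """
    diffs = [abs(method_support[c] - honest_scores[c] / max_score)
             for c in method_support]
    return sum(diffs) / len(diffs)
```

Under a metric like this, a lower number means the method's aggregate results more closely mirror the honest control measure.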

SPENCER: Got it, that's really helpful. Yeah, I find that very useful in terms of providing evidence about how people would react to different voting methods. But I will still say that I feel like there's a problem in the real world where you have adversaries trying to game things that might not be the same behavior as what people do when they first see something. That's not really so much a critique of the study, because you gotta start somewhere; it sounds like a useful contribution. But I do want to flag that I feel like I would love to see studies in contexts where there actually is an incentive to lie and people are used to the voting method.

AARON: We've also done this for the 2016 election, which didn't use a lot of the voting methods that we tested; it used the ‘choose one’ method. But in places where approval voting is actually used — in St. Louis, and in Fargo, North Dakota — we did the same kind of approach. So we saw, again, this closeness between how people said that they were going to vote on Election Day and this honest assessment score — and this is within the context of actual elections. And there are different approaches to doing that. When we're thinking about research methodology, there are often trade-offs in doing one approach versus another. For example, one thing that we're considering in the future is doing the measurement after the fact, as well as before the fact, which is what we'd done before — asking them how they intend to vote. We can also change that up and look at it after the fact. There are trade-offs both ways, though.

SPENCER: My understanding is that you advocate for approval voting — that you think the US should switch to it. Can you tell us about how you came to that conclusion and what the considerations were?

AARON: Sure. I actually started the Center for Election Science (which is where I'm the Executive Director), and for a long time, we actually didn't have much in the way of resources. I incorporated the organization in 2011. We got our 501(c)(3) status in 2012 — and it's 2021 at the time of this interview, so a lot of time has passed since then. And our first implementation of approval voting was in 2018, so even then, there's a big gap there. When you don't have a lot of resources, you do have some opportunity to take your time, and the organization at the time was made up of a lot of engineers, mathematicians, and political scientists. It was during that time that we really reflected on which voting method we wanted to get behind. Personally — and I can reflect a little bit from the group perspective as well — when I learned about voting methods, the first voting method that I learned about was ‘instant-runoff voting’ or ‘ranked-choice voting.’ And intuitively, that seemed exciting; it felt like — providing all this information, ranking — it felt like very rich information. And I had heard of runoffs before, obviously, and it was like, “Oh, well, runoffs are good, and this just makes that process easier.” And it was only after doing an obsessive amount of reading in this space that I started to learn some of the drawbacks of that particular voting method and, in doing so, started to gravitate towards this cardinal class of systems, which is where the group — the Center for Election Science in its earlier days — was at the time, looking at cardinal systems, which tend to involve scoring in some kind of context.
And so I moved from ranked-choice voting — where you're ranking the candidates and simulating sequential runoffs — to another class of ranking methods called Condorcet methods, where you take rankings and are able to simulate pairwise comparisons, so you can see which candidate would win head-to-head against each other candidate; and then went from that to score voting or range voting — where you score each candidate on a scale of 0 to 5 or some max number — and went from that to approval voting. And that transition over time was the result of thinking about the factors that make a voting method good: looking at winner selection, looking at how accurately candidate support was able to be reflected, and seeing that that really came through with the cardinal methods — particularly approval voting, and range or score voting. Between those two, the difference wasn't very large; and given that the difference wasn't very large, it made sense to go with the simpler one, which is ultimately why we decided on approval voting.
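The Condorcet idea mentioned here — simulating head-to-head matchups from ranked ballots — can be sketched as follows. This assumes every ballot ranks all candidates (a simplification; real Condorcet methods also need rules for truncated ballots and for resolving preference cycles):

```python
from itertools import combinations

def condorcet_winner(ballots):
    """Find the Condorcet winner from ranked ballots, if one exists.

    Each ballot is a full ranking, most-preferred first (a simplifying
    assumption for illustration). The Condorcet winner beats every other
    candidate in head-to-head matchups; returns None when no such
    candidate exists (a preference cycle).
    """
    candidates = sorted({c for ballot in ballots for c in ballot})
    pairwise_wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        # Count ballots preferring a over b in this head-to-head matchup.
        a_over_b = sum(1 for ballot in ballots
                       if ballot.index(a) < ballot.index(b))
        if a_over_b * 2 > len(ballots):
            pairwise_wins[a] += 1
        elif a_over_b * 2 < len(ballots):
            pairwise_wins[b] += 1
    for c in candidates:
        if pairwise_wins[c] == len(candidates) - 1:
            return c
    return None
```

The possibility of returning None is the famous catch: with three or more candidates, majorities can prefer A over B, B over C, and C over A, so a beats-all winner isn't guaranteed to exist.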

[promo]

SPENCER: So let's jump into some of the issues with these kinds of ranking approaches. You mentioned instant-runoff voting and you mentioned the Condorcet approach where there's head-to-head matchups. Why did you turn away from those?

AARON: Both of them have issues with the information component. On the surface, it seems like this is very rich information but it's also important to think of some of the drawbacks to this. One is that, despite the ranking, you don't actually have a clear indication of which candidates are actually supported because you're dealing with ranking or ordinal information and that's just not something that ordinal information provides. So you're actually missing out on that element.

SPENCER: Well, I'm not sure I understand that because you could, for example, assign some reasonable point system — say, your first choice gets five points, your second choice, four points, or whatever — and get some kind of sense. It's not like you don't have any information about what people prefer.

AARON: Well, that particular method, where you're assigning scores based on ranking, is yet another voting method called ‘Borda count.’ But even if we're using something like Borda count, and we have this ranking that we're providing, and we're moving that information into scores somehow, you still don't have an indication of which candidates are actually supported versus which ones are not. For instance, the candidate who's ranked second — is that a candidate who is actually supported, or do they just happen to be second on your list? Maybe you actually don't like any candidates from second on down, and you only like the first candidate. Or maybe, in another scenario, you actually really like the first three candidates, but you don't like anyone below that. With ranking information, you have no idea where that threshold is.
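A Borda count of the kind Spencer gestures at can be sketched like this: with n candidates, a first-place ranking earns n−1 points, down to 0 for last place (point schedules vary across Borda variants; this is one common convention):

```python
def borda_count(ballots):
    """Tally a Borda count: with n candidates, a first-place ranking
    earns n-1 points, second place n-2, ..., last place 0.

    Each ballot is a full ranking, most-preferred first (assumed
    format for illustration).
    """
    n = len(ballots[0])  # number of candidates, from the first ballot
    totals = {}
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            totals[candidate] = totals.get(candidate, 0) + (n - 1 - position)
    winner = max(totals, key=totals.get)
    return winner, totals
```

Note that the point totals still carry Aaron's ambiguity: a candidate's second-place points look identical whether the voter genuinely liked them or merely disliked them least.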

SPENCER: I agree with that on an individual person. I'm just saying there's a lot of ways to summarize the results. They feel like they give more insight across the whole population of how preferred different people were.

AARON: Yeah, in the aggregate, you can see a bit more in terms of degree there if you use, for instance, the Borda count, where you transfer the information from rankings to a number-type system. And when you aggregate those numbers, you can see a bit more in terms of degree of difference, in a way that you can't quite see the same way with raw rankings. But either way, it's a little bit weird, because you have to ask at the same time: what is it exactly that you're really measuring here? And do these aggregations actually make sense when you put them all together? Sure, you're adding numbers together, but is there something meaningful that we can take away from this?

SPENCER: I'm a little confused about this critique, because I feel like a similar critique could be levied at approval voting, which is: what does it mean to approve of someone? Where do you draw the line? Do you approve of someone if you're 80% happy, or if you're 50% happy? You know what I'm saying? Even if you can compute the statistics of what percentage approve of each candidate, can we really say what that means?

AARON: Yeah, I think you're right there. At the individual level, what the threshold is for approving of a candidate versus not approving of a candidate is going to be variable. Some people's threshold is going to be different from another person's threshold. And it’s also going to depend a little bit on the election dynamic as well. But what you can say at the end of the day is you have a clear percentage of people who you can, say, supported a candidate versus didn't support a candidate. And you can't do that in the same way with ranking information.

SPENCER: Got it. What other things caused you to steer away from the ranking based methods? Or is that the primary issue?

AARON: There's that kind of information component in terms of what that data actually means. There's also an interplay between that and practicality. For instance, a lot of local elections in particular have a lot of candidates, and we see in primaries, there are a lot of candidates. It can, in a practical sense, be kind of a pain to rank all those candidates. And so, to the degree that voters just don't feel like ranking all those candidates, or the ballot itself is truncated — so that you're forced to rank at most, like, three or five candidates — it limits the information that voters can use. And we know from other research — for instance, there's a Canadian mathematician, Marc Kilgour, who looked at this problem and saw that, when data was collected under a ranking method and respondents were limited in how many rankings they could provide, that information could fail to identify what we call the ‘Condorcet winner,’ the candidate who can beat everyone else head-to-head. That data selected the wrong winner explicitly because not enough information was able to be gathered. And you can contrast that to, say, something like approval voting, where, even with a list of, like, 20 candidates, it's much easier to go through that candidate list and say, “Okay, these are the candidates that I approve of, these are the ones that I don't.” It's much harder to rank each candidate individually. You can even run the thought experiment for yourself. Imagine 20 movies that you've seen; imagine trying to rank those 20 movies versus going through them and marking, “Okay, these are the movies I really liked.”
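
The head-to-head idea can be sketched in a few lines of Python. This is only an illustration of the concept, not Kilgour's actual study: the candidate names are made up, and a truncated ballot here simply treats every unranked candidate as tied below the ranked ones.

```python
def condorcet_winner(ballots, candidates):
    """Return the candidate who beats every other head-to-head, or None.
    Each ballot is a (possibly truncated) ranking, best first."""
    def prefers(ballot, x, y):
        # lower index = more preferred; unranked candidates share the bottom
        ix = ballot.index(x) if x in ballot else len(ballot)
        iy = ballot.index(y) if y in ballot else len(ballot)
        return ix < iy
    for c in candidates:
        if all(sum(prefers(b, c, o) for b in ballots) >
               sum(prefers(b, o, c) for b in ballots)
               for o in candidates if o != c):
            return c
    return None

full = [["A", "B", "C"], ["B", "A", "C"], ["C", "A", "B"]]
print(condorcet_winner(full, ["A", "B", "C"]))  # A

# Truncate each ballot to a single choice and the pairwise
# information that made A the Condorcet winner is lost:
truncated = [b[:1] for b in full]
print(condorcet_winner(truncated, ["A", "B", "C"]))  # None
```

The truncated case shows the failure mode in miniature: the same voters, with less of their ranking recorded, no longer identify any head-to-head winner at all.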

SPENCER: Right. It seems to put a big cognitive burden on the responder to have to rank so many.

AARON: That's right. So those are two particular reasons. But the other reason is — I would say that, particularly with instant-runoff voting and ranked-choice voting, approval voting also just does a better job of winner selection. For that comparison, both score voting and approval voting do a better job in terms of selecting the winner, while also doing a much better job of gauging the support for each candidate in a way that ranked-choice voting simply does not do, and we can see that in practice as well.

SPENCER: Could you elaborate? What does “doing a better job” mean in this case?

AARON: So one way of measuring that is to compare against a control measure that we've collected beforehand, an honest assessment of each candidate. And we can see in those studies that ranked-choice voting, or instant-runoff voting, has a much larger discrepancy from that control measure than approval voting and range voting do — and by a substantial margin, particularly as you go down the ballot to candidates who are further away from the leading spot.

SPENCER: Do you have a sense of why that is?

AARON: Yeah, I think part of it is just how the instant-runoff voting or ranked-choice voting algorithm runs. Say there's a candidate that you like, but you ranked them second or third out of a series of 10 candidates, and the candidate that you ranked first is the one who makes it to the end. Because your first candidate was never eliminated, the support you expressed for that other candidate never shows up at all. So with instant-runoff voting and ranked-choice voting, there are a lot of times when the information that you put on your ballot is just never used at all.
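
A compact sketch of that elimination loop makes the point visible (the ballots are hypothetical, and ties are broken arbitrarily). Note the comment marking where lower rankings go unread:

```python
from collections import Counter

def instant_runoff(ballots):
    """ballots: rankings, best first. Repeatedly eliminate the candidate
    with the fewest first-choice votes until someone has a majority.
    (Ties are broken arbitrarily in this sketch.)"""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        loser = min(tallies, key=tallies.get)
        # Only first-choice entries are ever counted; rankings listed
        # below a surviving candidate's name are never read.
        ballots = [[c for c in b if c != loser] for b in ballots]

# C is every voter's first or second choice (and beats both A and B
# head-to-head), but is eliminated in round one, so that support is
# never seen by the algorithm:
ballots = [["A", "C", "B"]] * 4 + [["B", "C", "A"]] * 3 + [["C", "B", "A"]] * 2
print(instant_runoff(ballots))  # B
```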

SPENCER: So you're saying that this is a reason why people don't express their preferences as accurately in the vote.

AARON: Actually, I'm not even saying that. What I'm saying is that, even when people do try to express their preferences accurately, the information that they provide is never even used by the voting method.

SPENCER: Oh, I see, I see. So it's sort of like, even if they express their preferences, it's just not using the information as well. It feels to me that there's a number of different variables we could be optimizing for here. And I'll throw out a few but I'd be interested to hear other ones that you think are relevant. So one is gameability and, as you said, every voting system can be gamed to some extent, but like, they differ in robustness, right? And ultimately, that's an empirical question, since it interacts with human psychology and game theory and stuff like that. Then there's a question of simplicity. It kind of sucks to have a voting method where it's really, really hard to aggregate it, where it's hard to use voting machines, or in order to aggregate it, you have to move every single vote to the same location before you can start to count (which can just be logistically complicated). There's also a simplicity issue around understandability, right? Maybe it lends more legitimacy to the system if people understand how the vote works. Because I mean, there's some wild voting systems, as I'm sure you're well aware, that are just really complicated — as a mathematician, I read about that. I'm like, “I don’t think I can explain [laughs] how this works.” They're just wildly complicated. They have cool mathematical properties, but if people don't understand what they're doing, maybe that hurts the legitimacy of the whole project. What other traits would you point to that we might optimize for in a voting system?

AARON: I think those are some good ones. When we're thinking about this, obviously, we want a voting method that selects a good winner. I would say secondary is having a voting method that also does a really good job of capturing the support of all the candidates. Those are two really important attributes. And to the degree that voting methods perform similarly on those traits, I think you would look at the next tier, which is thinking about practicality, thinking about how clear and understandable the voting method is. Because if, say, you have a voting method that performs just a hair better than other voting methods, but it's five times more difficult, then of course, you sacrifice that hair of utility and go with the method that's much easier. And what I would describe as a Pareto efficiency with approval voting over some of the other voting methods is that, if a voting method is simpler and it performs better than another voting method, then of course, you pick that voting method.

SPENCER: Right. There's also the issue of hung ballots, where sometimes with a voting method, it can lead to situations where you can't interpret someone's ballot. I don't know how big of an issue that is in practice.

AARON: There are spoiled ballots, for instance. With approval voting, it's about impossible to spoil your ballot; you would have to really throw it in a bunch of mud and stomp on it to spoil an approval voting ballot. You could select all the candidates and, technically, you have submitted a correct ballot; it just doesn't mean anything, because you haven't distinguished any candidates. You could also support none of the candidates, and again, you have the same kind of issue because you haven't distinguished the candidates. Very rarely do we see anyone do that. With a ranking ballot, by contrast, you can mess it up a number of different ways — you could give two candidates the same ranking, or you could skip over a ranking. There are several different ways of messing up a ranking ballot. As a consequence, spoilage rates for ranking methods tend to be higher than for a choose-one method, and certainly higher than for an approval voting ballot.
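
A toy validity check (not any jurisdiction's actual rules) shows why the two ballot types differ: a ranking ballot has internal structure that can be violated, while any subset of approvals is well-formed.

```python
# Hypothetical validity rules, for illustration only.

def valid_ranking_ballot(marks):
    """marks: dict of candidate -> rank. Spoiled if two candidates share
    a rank or a rank is skipped (truncation below the last rank is OK)."""
    return sorted(marks.values()) == list(range(1, len(marks) + 1))

def valid_approval_ballot(approved, candidates):
    """Any subset of the candidates is a well-formed approval ballot."""
    return set(approved) <= set(candidates)

print(valid_ranking_ballot({"A": 1, "B": 1, "C": 2}))      # False: duplicate rank
print(valid_ranking_ballot({"A": 1, "C": 3}))              # False: skipped rank 2
print(valid_ranking_ballot({"A": 1, "B": 2}))              # True
print(valid_approval_ballot(["A", "C"], ["A", "B", "C"]))  # True
```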

SPENCER: So we've talked about a bunch of other challenges of designing good voting systems. I'd love to hear some of your thoughts on what ‘Arrow's theorem’ is and kind of how that plays into things.

AARON: Yeah, Kenneth Arrow had a theorem which basically said he thought that certain criteria were important, and if a voting method couldn't satisfy those criteria, then maybe it wasn't such a good voting method. And he recognized that actually, no voting method could satisfy all of the basic criteria that he laid out — some of which were things like: if more than half of the voters supported a candidate, that candidate should win. There were other criteria in there, such as monotonicity — ranking a candidate better shouldn't harm that candidate. This is kind of a classical approach to looking at voting methods, which I would say, to some degree, is limiting. Also, his theorem applied to all ranking methods; it didn't include cardinal methods, which score voting and approval voting are part of. But it's clear that all voting methods have some kind of issue, and we really have to start thinking a bit more about, okay, what does it really mean to be a good voting method? In terms of this criterion-based approach, which is what Arrow was looking at, there are limitations. One is that, when we're looking at a particular criterion, it's a bit subjective whether that criterion is more important than another criterion. It also doesn't say anything about the degree or the frequency of that particular criterion being failed — so for instance, say a particular voting method didn't satisfy a criterion or violated it in some way: did that cause the winner to change in a way that was negative? And if it did, did it cause a really, really bad winner? Or did it just get someone who is a little bit worse than the ideal candidate? So I think these are the drawbacks of looking at it this way — although, like a lot of folks, I also have criteria that I think are a bit more important.
So for instance, one criterion that I think is important is something called the favorite betrayal criterion, which is saying that a voting method should always allow the voter to support their honest favorite candidate and, believe it or not, that's actually a criterion that's pretty hard to satisfy.

SPENCER: You mean, if they're acting rationally, right? In their own self interest, they should always be able to support their favorite candidate?

AARON: So with Arrow’s theorem, it applied to honest voting, but with the favorite betrayal criterion, it's saying that even if you're trying to be tactical, you should always be incentivized to support your honest favorite candidate.

SPENCER: Does ‘to support’ mean to put it first or, like, approve of it?

AARON: Correct. Yeah, if they're your favorite candidate, then yeah, you should be able to give them your maximum support. And that's important, because if we're to get any good information out of a voter, we at least want to know who their favorite is.

SPENCER: So I imagine approval voting satisfies that. Does instant-runoff voting satisfy that criterion?

AARON: In fact, it does not. And we've seen this play out in actual elections before. The most infamous election is in Burlington, Vermont — their 2009 election for the mayoral spot. There were three candidates: a Progressive Party candidate, a Democrat, and a Republican. When the city started to use ranked-choice voting, they said, “Hey, you can always support your favorite candidate. You don't have to worry about that.” And in fact, they were incorrect. Now, Burlington, Vermont is very liberal — this is the city that elected Bernie Sanders as its mayor repeatedly. So the conservatives, being a minority here, listened to that and said, “Okay, we're going to rank the Republican first. That's the candidate who's closest to our interests.” And what happened as a consequence was that the Democrat candidate actually had the fewest first-choice preferences. As a result, the Democrat candidate, under ranked-choice voting, was eliminated first. Those voters' next-choice preferences were dispersed, and enough of them went to the Progressive candidate that the Progressive candidate won. Now, if we look at the ballot data, what we can see is that the candidate who was able to win head-to-head against all the other candidates was, in fact, the Democrat — so we know right off that the wrong winner was selected, that the Democrat was a better winner than the Progressive. The other thing that we know is that, had those conservative voters not ranked the Republican first, but instead ranked the Democrat first, the Republican candidate would have gotten the fewest first-choice votes, those votes would have been transferred, and the Democrat would have won.
So the conservatives, by ranking their honest favorite first, got their worst possible outcome.
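
The Burlington dynamic can be reproduced with a toy simulation. The ballot counts below are made up for illustration — they are not the real 2009 totals — but they are shaped so the same thing happens: with honest rankings the Progressive wins, while conservatives "betraying" their favorite elects the Democrat, a better outcome for them.

```python
from collections import Counter

def irv_winner(ballots):
    """Instant-runoff: eliminate the lowest first-choice candidate
    until someone has a majority (ties broken arbitrarily)."""
    ballots = [list(b) for b in ballots]
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > sum(tallies.values()):
            return leader
        loser = min(tallies, key=tallies.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical 100-voter electorate; Dem beats both rivals head-to-head.
honest = (
    [["Prog", "Dem", "Rep"]] * 34 +  # progressives
    [["Rep", "Dem", "Prog"]] * 37 +  # conservatives, honest favorite first
    [["Dem", "Prog", "Rep"]] * 20 +  # center-left
    [["Dem", "Rep", "Prog"]] * 9     # center-right
)
print(irv_winner(honest))  # Prog -- Dem is eliminated in round one

# If 11 conservatives betray their favorite and rank Dem first:
tactical = (
    [["Prog", "Dem", "Rep"]] * 34 +
    [["Rep", "Dem", "Prog"]] * 26 +
    [["Dem", "Rep", "Prog"]] * (9 + 11) +
    [["Dem", "Prog", "Rep"]] * 20
)
print(irv_winner(tactical))  # Dem -- better for those voters than Prog
```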

SPENCER: I want to go back to Arrow’s theorem for a moment, because it does seem to me quite profound — and I completely agree with you that what we actually have to do is look empirically at what works — that's what matters, what works in the real world. But it does seem to me that Arrow's theorem tells us something important, which is that — it seems to me that the criteria Arrow comes up with are all really reasonable things to want in a voting system. And by showing that no ranking system has those traits, it feels to me like it gets the conversation started saying, “Look, we can't math our way out of this.” His theorem basically says, there's no system that's going to satisfy everything we want. So now we have to get into a debate of what we actually want. Do you agree with that interpretation?

AARON: Yeah, I think that's right. And I think there's an unfortunate takeaway that some people draw from Arrow's theorem — and I've seen him correct people on this issue a number of times before his death. A lot of people look at it and say, “Ah, well, there's no method that's perfect. So we might as well just all go home; there's no use in debating this any further if none of them are satisfactory.” But that is an absolutely wrong conclusion. The right conclusion, which I think is what you're alluding to, is: “Okay, we can't get perfection on this. But that's not to say that some voting methods aren't better than others.” And in fact, some voting methods are way, way better than others. We need to think about how we measure how good a voting method is, and see how these voting methods stack up against one another. At the same time, when a voting method has a shortcoming, we can't just say, “Ah, well, we've got to throw that in the garbage bin,” because all voting methods will have some kind of shortcoming. It's about reflecting on what our values are in a voting method, what makes a voting method good, and seeing how those attributes stack up against one another. That's really how we should be assessing which voting method makes sense in a particular situation.

SPENCER: Yeah, I feel like one of the clearest ways to see that (even though no voting system is perfect, that doesn't mean there aren't some better than others) is — there's this wonderful table, and I'll add a link in the show notes, showing the different properties of different voting systems. It's kind of amazing how many different voting systems there are. But what's also fascinating is that some systems are just incredibly bad — they fail on almost every single metric you can imagine — yet each is still a way of deciding who wins based on people's preferences.

AARON: And I would maybe caution on that a little bit because it sounds like the table that you're talking about is using a criterion-based approach, which is saying like it has these particular properties and meets them, or it doesn't meet them. I kind of go back to thinking like, “Okay, well, what's really important in the voting method? Does it do a good job in winner selection? Does it do a good job in capturing candidate support? How practical is it? How understandable is it?” So that tends to be the framework when I think about this particular type of problem.

SPENCER: The other thing I want to dig into with you is, what would actually happen if we switch the system in the US? Suppose you got what you wanted? We switch to approval voting. First of all, would you want it at all levels of government?

AARON: Yeah, approval voting is something that works really well for single-winner elections. Now, there are multi-winner forms of approval voting that are proportional, and I tend to be sympathetic towards multi-seat legislative bodies being elected with a multi-winner proportional method. But for single-winner elections, which is what we were talking about before, approval voting is very robust and really works at all levels of government.

SPENCER: Okay, got it. So suppose your dreams come true; we get approval voting throughout all the systems of government. What would we notice is different? What would you predict would change?

AARON: One of the most robust findings that we have is the amount of accuracy in terms of supporting different candidates. For instance, right now, you can look at an election and you see, “Oh, well, there's a Democrat, there’s a Republican over there.” And if the ballot access in that particular state isn’t so onerous as to keep other candidates off, you might see a Libertarian or a Green Party candidate or an Independent. And when you look at the election results, they're going to have virtually no support. The Libertarian Party — this is the third largest party in the US — for instance, in the 2016 election, they had 3% support and that was a good year for them. With approval voting, we know that it does a really good job of capturing levels of support for all the candidates. And so what we would expect to see is other third parties and independent candidates do much better. So we could see them getting 20, 30 plus percent approval — and in addition to that, approval voting does a much better job of hitting the median voter. And so, as a consequence, I would suspect that we're not going to see these big outliers far away from where the median is. As a consequence of that, being able to see policies that are more in the middle, more sustainable over time, versus what we have now where we see this pendulum swinging wildly back and forth, where one party is in control one moment, and another party is in control the other moment, and they're just undoing each other and you have no sustainability of policy and vision.

SPENCER: Is the idea there that, if you had a candidate that sort of sits in the middle between two other preferred candidates, a lot of people would end up approving of that person because they could grab votes on both sides? And so they could end up beating the candidate sort of like in the middle of the Democrats and the candidate in the middle of Republicans?

AARON: Yeah, that's right. And right now — we can think about this in primaries, or we can think about this in general elections — what can happen (and what we have seen happen in instances) is that the candidate in the middle can get their votes split on either side, so their vote is being divided between them and the person on the left and between them and the person on the right. And so they can get just that sliver in the middle. That can leave them with the least amount of support and either not winning at all, or not making it to the next round, whether they're using a runoff or a ranking method that simulates a runoff. Whereas with approval voting, you can support the candidate in the middle. If, say, you're center-right, you can support the candidate in the middle and the one on the right. It's to your advantage to do that, because you also don't want the candidate on the left to be elected. And vice versa: if you're center-left, you support the candidate on the left and the one in the middle, because you don't want the candidate on the right. As a consequence, you get that consensus candidate in the middle who maximizes everyone's utility. Instead of having this pendulum effect, where you're just going back and forth, you actually get someone who is most representative of the electorate itself.
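
The center-squeeze contrast can be shown with a toy electorate — all numbers here are hypothetical. Under choose-one voting, each bloc marks only its favorite and the centrist is squeezed out; under approval voting, some voters in the adjacent blocs also approve the centrist.

```python
from collections import Counter

# Choose-one: 100 voters each mark only their favorite.
plurality = Counter({"Left": 40, "Center": 25, "Right": 35})
print(plurality.most_common(1)[0][0])  # Left -- the centrist is squeezed

# Approval: the same voters may approve more than one candidate.
approval_ballots = (
    [{"Left"}] * 25 + [{"Left", "Center"}] * 15 +  # left bloc
    [{"Center"}] * 25 +                            # centrists
    [{"Right"}] * 20 + [{"Right", "Center"}] * 15  # right bloc
)
approvals = Counter(c for ballot in approval_ballots for c in ballot)
print(approvals.most_common(1)[0][0])  # Center, with 55 of 100 approving
```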

SPENCER: So what would happen in your view to the two-party system under approval voting?

AARON: Maybe right now, some of the third parties — for instance, the Libertarian Party and Green Party — may be too far away from the median voter, but some of their ideas aren't, and those ideas aren't getting picked up by the major parties. So I suspect that some of their ideas could gain a lot more traction than they do now. And if they were to begin winning, or if another party yet to exist were to win, I would suspect that it would be a party, or a candidate running as an independent, much more towards the middle, because that's the type of candidate that approval voting prefers. I also want to hedge a little bit and say that, when I talk about a candidate being towards the middle, I don't mean some super boring candidate either. For instance, when we did polling on the Democratic primary, we saw both Bernie Sanders and Elizabeth Warren doing quite well under approval voting — and, of course, none of us would say that they don't have strong opinions on pertinent issues. So being in the middle does not mean that these are candidates with no opinions, or that they don't take bold stances.

SPENCER: Maybe one way to think about it is there's lots of ways to be in the middle. Because being in the middle just means that you're about equally appealing to both sides. It doesn't — and there's so many different issues, right? The parties, as we know them, they take stands on certain issues. But there are lots of other issues that are not clearly in the middle of either of those two parties.

AARON: That's right. And the major parties, in a lot of instances, are silent on a number of issues that some third parties are vocal about. To the degree that they raise issues that are being completely ignored by major parties, that's going to be to their benefit, particularly issues that are very popular.

SPENCER: So let's roll this forward in time. So we get a candidate in the middle to win. What do the parties do? Because the parties are like groups of people that have affiliations with each other. What do you think — I mean, obviously, it's hard to know the future — but what would you predict would happen?

AARON: Well, I would say for rational parties, what they would do is they would shift their positions to make themselves more likely to win. And when you see their primaries, you would also suspect that the candidates within those primaries that have more extreme views — if they have novel views, they will get that support reflected to the degree that they're supported — but if they're just out there away from where the median voter is, they're just not going to get elected.

SPENCER: Interesting. So you suspect a sort of convergence towards the middle then, among the parties?

AARON: Yes, I think it would be to their advantage. And it would also be to their advantage to identify issues that other candidates or parties are ignoring — issues that are popular or could be popular — and bring those to the table, being able to bring new ideas to the table that would otherwise be ignored.

SPENCER: I don't know what I think about this argument. But I've heard the argument made, that we live in a sort of comfortable duopoly where the two parties — actually, it's in their interest to have the other party exist. Of course, they want to beat the other party in any given election. But they would much rather that it be a two-party system where they're going to win half of the elections on average, than a system where lots of other random people can win that are not from their own party. Do you think there's some truth to that or not so much?

AARON: I would say there is and there's evidence of that, particularly with the way that the US addresses ballot access laws. Now when we are voting we think about vote-splitting between candidates, but the parties themselves think about that, too, and they think about a party that maybe has some overlap between them, and they see the risks there. And as a consequence of wanting to push away that risk of splitting their vote with another party or candidate and having them lose — instead of saying like, “Hey, maybe we should look at a better voting method” — the step that they take is to make sure these other candidates or parties never get on the ballot in the first place. And as a consequence, the US has some of the worst ballot access laws in the world. So there's complete evidence for what you just said, which is that there's a lot of incentive for major parties to keep that competition away. But you lose some of those arguments when you have approval voting because you don't have vote-splitting being an issue anymore. As a consequence, you don't have the same kinds of incentives to keep those third parties and independents off the ballot.

SPENCER: So what do you mean by ballot access laws? Can you elaborate on that?

AARON: Sure. In the US, when you want to run for an office, you have to file for ballot access. Normally it means one of two things: either you pay some kind of fee to get on the ballot, or you gather a bunch of signatures to get on the ballot. Now what's interesting here — and I would say highly unfair — is that if you're a major party candidate, you have a different set of rules in terms of what you need to do to get on the ballot, versus if you're an independent or a third-party candidate. And in instances where we see that discrepancy, we can see independents or third-party candidates, for instance, having to get tens of thousands of signatures to get on the ballot, whereas it's taken as a given (or a much, much lower standard) for the major party candidate to get on their ballot. And in practice, when you have to get a bunch of signatures, and you're not able to do it, it's normally not for lack of people wanting to sign to put you on the ballot; you just have to pay to get people to gather signatures for you and so it costs a lot of money. So if you don't have enough money to gather all those signatures, then you essentially just don't get on the ballot. When you look at this for local offices, it's bad. But when you look at it for the presidential office, it's even worse, because you have to do this across 50 different states with 50 different sets of rules, so it's enormously harder.

SPENCER: So if this duopoly theory is pretty accurate, does it suggest that, if we switch to approval voting, the two parties might collaborate in order to try to take down any non-party candidate?

AARON: Well, they could attempt to do that. And it would be, I would say, in some ways, a bit interesting to see them work together more. They would also be not so far apart as they are now because it would be to their incentive to find candidates who were a little bit closer to where the median voter is.

SPENCER: It is interesting to see that both parties wanted Trump to lose, but they failed to make him lose. So that's kind of fascinating.

AARON: They still have the voting method to contend with. If you have — if the major parties, say, didn't like a particular candidate, or a new party, but they were really popular with people, and they really meshed with where the median voter was — I mean, they can try hard, but they're gonna have a much harder time stopping that kind of candidate.

SPENCER: Well, it depends, I guess, on how much you think the populace is manipulated in terms of what information they have access to, what they hear in the news, what's recommended to them on YouTube, and so on. Do you have a sense of that? What's your opinion on the extent to which people's views on candidates are effectively implanted in their minds through strategic campaigns, as opposed to based on actual information about what's really going on?

AARON: Well, I think I agree with you, with the media particularly playing a big role. For example, the Commission on Presidential Debates: they decide who gets to debate in the presidential elections, and they decide based on a contract — the Democrats and Republicans basically collaborate and decide who's going to be debating with them. And surprise, surprise, they don't like company when it comes to the debate stage. The other part that's important here is that they're required to have specific criteria that decide who joins and who doesn't. And the criterion that they set is that, if you want to get on the debate stage, you have to poll at least 15% across five national polls. With the choose-one method that we use now — which is also how they poll — it's about impossible to do that. One of the reasons is that, if you don't have a lot of support, or no one has heard of you, then good luck getting even more support beyond that. And there's also, of course, the issue with the choose-one method that, if nobody thinks you're going to win, people don't support you. So you have all these factors that go into it. Whereas with approval voting, even when you're polling with approval voting, it's much easier for that support level to be accurate, and if you have a lot of support, it's much easier to have that reflected. As a consequence, when you have more support, whether in polling or through actual election results, it's a whole lot harder to marginalize you. When you're getting 5%, it's easy to push you aside; when you're getting 20%, 30%, 40%, it's a whole different ball game — it's much harder to marginalize you then.

SPENCER: One of the critiques that sometimes is thrown at democracy is that democracy makes sense insofar as people are voting for things that are in their own interests. So I'm wondering, do you think that most people do vote for things that are in their own interests?

AARON: I suspect so. I don't know that I have the research in front of me that would indicate one way or the other. But I would have a strong suspicion that they are attempting to vote in their own interests, although not always successfully — particularly when they have a tool that's not very good, that even when they try to vote in their interests, the tool forces them to vote against their interests.

SPENCER: I'm in favor of switching to approval voting if we can, but I will say I have a little bit of hesitance around it. And I think that hesitance has to do with a kind of Chesterton’s fence argument — which is, for those that haven't heard that phrase, the idea is, imagine you're walking along and you see a fence, and you can't notice any particular reason for the fence to be there. You look around; it seems totally pointless. So you think, “Oh, this fence is just in the way, let me tear it down.” And the Chesterton's fence argument says, “Well, until you understand why the fence was built, maybe don't tear it down.” So I worry about some kind of unexpected change in society that, you know, we have some weird equilibrium that's developed over a really long period that we're throwing out of whack. And while my expected value prediction is, it actually would make things better (because I think the result would reflect people's interests more, and I don't think people's voting is perfectly correlated with their own interests but I think it does capture, to some extent, their own interests), I do kind of worry about the unknowns of throwing away an equilibrium that we've had for a long time.

AARON: Well, we keep doing the same thing for as long as we don't change it. There are lots of ways that we've carried on doing things that have been pretty terrible, until we stopped doing them that way. I don't know that this is particularly different in that respect. And there are some advantages in the way this is going about. For instance, at the Center for Election Science, we're working with groups and campaigns across the country, and we're seeing this play out one city at a time. We're seeing it play out in Fargo, North Dakota. We're seeing it play out in St. Louis, Missouri. We just launched a campaign in Seattle, Washington; we've got a bunch of really bright and awesome folks in Seattle who are running a campaign now. And in the future, hopefully not too distant, we're going to start looking at states as well. So we're going to see this play out over time. If something catastrophic looks like it's happening that would make it worse than the status quo, we're going to have plenty of time to see that. I suspect that will never happen, but if something unforeseeable does come up, we will have the opportunity to see it well ahead of time.

SPENCER: That's good to hear. Because I think that's exactly the right approach: you tear down the little fences first, [laughs] and monitor to make sure there wasn't some beast lurking there that you didn't realize, before you tear down the big fence. So I like that approach a lot.

[promo]

SPENCER: Okay, so can you tell us a bit more about how do we get to this world where we use a better voting system? You talked about doing local levels. Is the plan to try to get enough local elections that it kind of gets more normalized? What's the strategy long-term?

AARON: This should not come as a surprise to anyone, but when you ask people in office to change the way that they get elected, they're not very excited about that proposition. Our conclusion is that we just don't ask them because they have a conflict of interest that's pretty clear.

SPENCER: You can get the losers excited, though; they just want a shot at power. [laughs]

AARON: Yeah. So the first thing we do is work with the voters themselves, the people in the city. We ask them to reflect on their elections. Oftentimes we don't have to ask; they're already reflecting on their elections and are pretty upset about them. They know their community, so they reach out to other key stakeholders and share this potential solution. We help provide them the tools to achieve the outcome that they want to see, and we run that through ballot initiatives. They gather the signatures to get on the ballot, and they work with the community to make sure that everyone understands the benefits of approval voting and the issues that their city or state currently faces. And so far, in Fargo, North Dakota, the ballot initiative passed with 63.5% of the vote. In St. Louis, Missouri, it passed with 68%. Early polling in Seattle already puts it around 70%. When people learn about this, and we work with them on education campaigns, and they talk to other key people in their community, it turns out to be pretty popular. So we've seen this as a road to helping those communities get the outcome they want, using approval voting as the method and ballot initiatives as the tool to get them there.

SPENCER: And so what's the scale-up strategy? How do you go from there to, you know, let's say the national level? Obviously, it's very ambitious.

AARON: Well, we're a very ambitious organization. To give a sense of our ambition and how ridiculously quickly we move: we started in 2011, got our 501(c)(3) status in 2012, and didn't get funding until the very tail end of 2017. At the tail end of 2017, nowhere in the country used approval voting. Within a year of our initial funding, which was from the Open Philanthropy Project (Open Philanthropy now), we were able to hire our initial staff and get approval voting implemented in its first US city. Within a year of funding, we took something that didn't exist, and we made it exist. And that was in Fargo, the largest city in North Dakota, with about 125,000 people. Two years later, we worked with the folks in St. Louis, Missouri, which has a population of about 300,000 people, and we got that to pass. And coming up on about two years later yet again, we're now working with the folks in Seattle, which is around 700,000 people. If you're taking notes on your end and charting this out, you're seeing that we've gone up in population by about two and a half times with each campaign. So we're moving tremendously fast, having started from zero. We're quite capable of moving forward on this; the only limiting factor has been resources. With resources, we can move very quickly. We have very smart people on our team. So when looking at this, we see it as proof of concept, replication, and scaling. We're doing that right now at the city level. The next stage is doing it at the state level, and there you do a lot of things that are quite similar: you work with key groups at the state level, and you work with them to pass statewide ballot initiatives.
And the component of this that scales well is that, in the US, the states are responsible for deciding who their federal representatives are for their US House seats and for the Senate seats. And states also determine how their electoral votes are allocated for presidential elections. So when you win a state through a ballot initiative, you also affect all these federal components as well. And you can do that across multiple states to be able to affect multiple federal positions and electoral votes.

SPENCER: So you're saying that a single state could adopt approval voting even for the presidential election, even if no one else adopts it?

AARON: That's right. Yes.

SPENCER: That's really cool and helps so much. Okay, so before we wrap up, I wanted to do a quick rapid-fire round with you with a few faster questions. How's that sound?

AARON: Sounds great. Yeah.

SPENCER: All right. So if you could snap your fingers and put instant-runoff voting in place, but you couldn't get approval voting in place, would you do that? Would you still think it's a big enough improvement that it would be worth taking?

AARON: It's a little challenging because it does displace other opportunities. So I would be a little hesitant, to the degree that it would displace opportunities for other, better voting methods.

SPENCER: But if, let's say, our current system is 0 on a scale of 0 to 100, and approval voting is 100, where would you put instant-runoff? Would you put it at 50 or would you put it at 70?

AARON: If plurality voting is 0, and let's say approval voting is 100, I would put instant-runoff voting around 50 to 60 or so, keeping in mind that it's really just kind of a complicated runoff system that doesn't do a very good job of capturing support for the candidates.
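[Editor's note: the "complicated runoff system" Aaron describes can be made concrete with a minimal sketch of instant-runoff counting. This is an illustrative toy, not any jurisdiction's official tallying rules, and it ignores tie-breaking details; the ballots are hypothetical.]

```python
def instant_runoff(ballots):
    """ballots: lists of candidates in preference order. Returns the IRV winner:
    repeatedly eliminate the candidate with the fewest first-place votes and
    transfer those ballots, until someone holds a majority of remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count current first choices among non-eliminated candidates.
        counts = {}
        for b in ballots:
            if b:
                counts[b[0]] = counts.get(b[0], 0) + 1
        total = sum(counts.values())
        leader = max(counts, key=counts.get)
        if counts[leader] * 2 > total or len(counts) == 1:
            return leader
        # No majority yet: eliminate the weakest candidate, transfer ballots.
        loser = min(counts, key=counts.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# B is everyone's acceptable second choice but has the fewest first-place
# votes, so IRV eliminates B in round one. Approval voting, by contrast,
# would tally all of B's support in a single counting pass.
ballots = [["A", "B"], ["A", "B"], ["B", "A"], ["C", "B"], ["C", "B"]]
print(instant_runoff(ballots))  # prints A
```

The round-by-round elimination is why results can't be tallied in one pass the way plurality or approval totals can.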

SPENCER: Okay, cool. So I know that there's an org that is trying to promote instant-runoff voting, and obviously, you have a point of contention with them. Why do you think that you disagree? What is sort of the nature of your disagreement around why you're advocating for approval, and they're advocating for instant-runoff, and they haven't been persuaded by your arguments?

AARON: Part of that is a little challenging, and we do face those challenges directly. When the Seattle campaign was announced, an organization, FairVote Washington, said in a paper that they would be disappointed if approval voting were implemented in Seattle, which is, of course, discouraging for us to hear.

SPENCER: It seems like two groups that are largely aligned are tearing each other down, which is not ideal.

AARON: Yeah, yeah. And to be fair, when there's another campaign for another voting method, we are vocal about what we think about different voting methods, and we study the results for those. But we also don't go into a particular campaign and say, "Hey, don't vote for ranked-choice voting," if they're doing a ranked-choice voting campaign. We'll have opinions about differences between the voting methods, and we'll turn around and collect data after the fact to see how those elections go. And if there are bad claims being made about particular voting methods, we'll correct them. But it is a little bit challenging. I think part of it is them looking at it and saying, "Hey, we've been doing this all this while and we've been getting all these funders. If we're wrong at this point, then maybe it's hard for us to turn back or to face our funders and say, 'hey, we made a mistake.'" Whereas we came from, I would say, a more privileged position, which was that we didn't have any of those factors pushing down on us. So when we were looking at this problem, we could just say, "Hey, we think this is the right one to go with. It satisfies the technical factors of getting good winners and capturing support, and it's not needlessly burdensome." And so I feel like we have a bit of an advantage, coming in as a second mover, to be able to say, "Hey, we can go forward with this, and we don't have to worry about any of these other constraints that other ranked-choice voting or runoff voting organizations have."

SPENCER: Alright, final question for you — it's a little more far afield. What do you think of online voting?

AARON: This is, I'll admit, a little bit outside of my technical expertise; my expertise is in social science and law, and I've been looking at voting methods for over 10 years. With online voting, when I look to other experts, one of the main concerns they have on the technology side is issues like 'man in the middle' attacks, which can be challenging to address. But this is an area where I don't have the same level of expertise, so it's harder for me to analyze this problem in the same kind of technical way that I can for other voting method problems.

SPENCER: Right, right. You just pointed out the idea that there are serious security concerns that would just have to be dealt with in an online voting system.

AARON: Yeah. And I would look to folks like Matt Blaze, who are a bit more informed on this topic.

SPENCER: So if people want to support you, how should they do that?

AARON: If you want to support what we're doing, you can go to ‘electionscience.org’, you can sign up for our newsletter. If you are excited about bringing approval voting to your own city or state, you can go to ‘take action’ on our website, and you can join our Discord, you can join a local chapter. You can also support us financially by giving a tax-deductible donation through our ‘give now’ button. If you have more complicated assets that you want to give, such as through stocks or cryptocurrency, you can give that through a donor-advised fund. I'm also a licensed attorney, and I write tons of essays on tactical aspects of giving. So if you reach out to our team, we can also help you with whatever kind of tax-efficient way that you want to give — a lot of resources on our end to be able to help you, however you want to become involved.

SPENCER: Aaron, thanks very much for coming on. So super interesting.

AARON: Awesome. Yeah. Thanks so much, Spencer.

[Outro]

JOSH: What are three books that have impacted your life?

SPENCER: The one that comes immediately to mind is ‘The Selfish Gene’ by Richard Dawkins — which is not about genes for selfishness; it's a very confusing title. It's about a way of looking at the world through an evolutionary lens where genes are in competition with each other. And I highly recommend the book. I think it really helped me understand evolution at a much deeper level. And understanding evolution is important if you want to understand the world because we are evolved creatures. We evolved through evolution and so understanding that process is really useful. And I think a lot of people coming out of high school think like they kind of know what evolution is but they often actually have misunderstandings about it. And that book will help clarify many of those misconceptions. A second book I would point to is the book ‘Feeling Good’ by David Burns, which is a book on cognitive behavioral therapy. That one’s focused on depression; he also has one for anxiety as well. He's just a wonderful practitioner of cognitive behavioral therapy, who also is a great teacher of it. And I thought that was really useful. It kind of was the first thing that ever taught me about CBT. And it's a great set of techniques for trying to help yourself be happier when you're in difficult situations, or using evidence-based methods to reduce depression and anxiety. It inspired some of the work that I do; for example, our work on UpLift and Mind Ease is partially inspired by that. A third book that I would point to is one that I have never heard anyone talk about. It's called ‘The World Until Yesterday’, and it's a book about cultures around the world that are pretty disconnected from large-scale culture, so these are kind of tribal groups. And it's looking at, what do these groups have in common with each other? And in what ways are these groups different from each other? 
And I thought it was just a fascinating exploration of the variety of human culture and how different humans can be and, also in some ways, that humans can be similar even though they're not in contact with each other. So I thought that was really fascinating and gives a different lens on psychology than I've seen before.





Credits

Host / Director
Spencer Greenberg

Producer
Josh Castle

Audio Engineer
Ryan Kessler

Factotum
Uri Bram

Transcriptionist
Janaisa Baril

Music
Lee Rosevere
Josh Woodward
Broke for Free
zapsplat.com
wowamusic
Quiet Music for Tiny Robots

Affiliates
Please note that Clearer Thinking, Mind Ease, and UpLift are all affiliated with this podcast.