with Spencer Greenberg


May 2, 2024

What exactly *is* quantum computing? How much should we worry about the possibility that quantum computing will break existing cryptography tools? When will a quantum computer with enough horsepower to crack RSA likely appear? On what kinds of tasks will quantum computers likely perform better than classical computers? How legitimate are companies that are currently selling quantum computing solutions? How can scientists help to fight misinformation and misunderstandings about quantum computing? To what extent should the state of the art be exaggerated with the aim of getting people excited about the possibilities the technology might afford and encouraging them to invest in research or begin a career in the field? Is now a good time to go into the field (especially compared to other similar options, like going into the booming AI field)?

Scott Aaronson is Schlumberger Chair of Computer Science at the University of Texas at Austin and founding director of its Quantum Information Center, currently on leave at OpenAI to work on theoretical foundations of AI safety. He received his bachelor's from Cornell University and his PhD from UC Berkeley. Before coming to UT Austin, he spent nine years as a professor in Electrical Engineering and Computer Science at MIT. Aaronson's research in theoretical computer science has focused mainly on the capabilities and limits of quantum computers. His first book, *Quantum Computing Since Democritus*, was published in 2013 by Cambridge University Press. He received the National Science Foundation's Alan T. Waterman Award, the United States PECASE Award, the Tomassoni-Chisesi Prize in Physics, and the ACM Prize in Computing; and he is a Fellow of the ACM and the AAAS. Find out more about him at scottaaronson.blog.

**JOSH:** Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Scott Aaronson about the power, uses, and impact of quantum computing.

**SPENCER:** Scott, welcome.

**SCOTT:** Thanks. It's great to be here.

**SPENCER:** We hear so much about quantum computing in the news, and it's so hard to separate the hype from what's really, really cool about this new technology and this new scientific enterprise. I'd like to start there. Tell us first, what is quantum computing, very basically. And then I have a bunch of questions to try to cut through the hype and figure out what's real.

**SCOTT:** Sure. Quantum computing is a new way of harnessing nature to do computation. You want to exploit quantum mechanics, which has been the basic operating system for physics for about 100 years now. And the key thing that quantum mechanics says about the world is, first of all, that the world is probabilistic at the very deepest level. We can't predict with certainty when a radioactive atom is going to decay, for example, or whether we'll find some photons to be horizontally or vertically polarized or whatever; we can only calculate probabilities. But that's not even the half of it. The really shocking part is what we have to do to calculate these probabilities. It is completely different from any way that we would calculate probabilities in day-to-day life. Normally, you'd say a probability is just a number from zero to one. So it's real, and it's non-negative. And if I want to know the total probability for a thing to happen, like a particle to hit a certain spot on a screen, then I have to add up probabilities of all the different ways that that thing could have happened. Maybe the particle got there by going left first, maybe it got there by going right, but each path has some probability and I add them all up, and that gives me a total. Quantum mechanics says, at the fundamental level, no, that is not how the world works. It actually works in terms of these more fundamental numbers that are called amplitudes. Amplitudes can be positive or negative. In fact, they can even be complex numbers involving the square root of minus one. You'd never talk about a negative 30% chance of rain; that would be nonsense. And you still don't in quantum mechanics, actually. At the end of the day, we get probabilities which are numbers from zero to one, but we have to calculate them by adding up these amplitudes. Amplitudes can be positive or negative. They can point in different directions. They can cancel each other out. 
So for example, let's say I shoot a photon at a screen with two small slits in it, and then I look at where that photon ends up on a second screen behind it. This is the famous two-slit experiment. Richard Feynman used to say that all of quantum mechanics is contained in this experiment. He was one of the founders of quantum computing. So now the key thing that we see when we do this is that we see this diffraction pattern, where there are certain spots where the photon just never wants to appear. They're just dark patches where the photon basically never shows up. And yet, the crazy part is that, if I close off one of the slits, then the photon can appear in those places. So to say that again, by decreasing the number of possible paths that the photon can take to reach some spot, I can increase the chance that it gets to that spot. This is not just some detail of physics; this is going all the way to the laws of probability themselves, what you might have thought was not even the subject matter of physics. You might have thought that this is just pure math, but quantum mechanics says no. The way that you calculate probabilities has to change. And the way that we explain this, starting a century ago, is that there is some total amplitude for a photon to get to some spot. And you calculate that by adding up an amplitude for every path that the photon could have taken to get there. If one path has a positive amplitude, and another path has a negative amplitude, then those two contributions can cancel each other out. They can interfere destructively, as we say, so that the total amplitude could be zero, and now that event will never happen at all. 
Now, a very central rule in quantum mechanics is called the Born Rule, from Max Born, and it says that, when you make a measurement — when you look at your system that was previously isolated, your particles or whatever — let them make an imprint on the wider world, then you force the system to decide which outcome it's going to give you. It makes that decision randomly, and whichever decision it makes, it sort of collapses. So measurement is famously this destructive process in quantum mechanics. By making a measurement, you are changing the system; you're forcing it to decide which outcome it wants to show you. And the way that it makes that choice is that the probability of each outcome is equal to the square of the absolute value of the amplitude of that outcome. This is how these complex numbers become the probabilities, the real numbers between zero and one that describe our actual experience. But before we make the measurement, when no one was looking, things are working in this very different way in terms of these complex numbers — one for every possible configuration that the system could be in — and these complex numbers can interfere with each other. And we know that they had to be there because that's the only way to explain the probabilities that we see when we do look, like how, for example, closing off paths for light to go could increase the chance that light gets to some destination. So that's my two-minute summary of quantum mechanics. Everything else that you've ever heard about it, about entanglement or quantum tunneling, they're all just different applications of that one change to the rules of probability. Since I'm a computer scientist — I'm not a physicist — I can just let you in on the secret. And now a quantum computer is just a computer that would try to exploit these different laws of probability that nature obeys at the microscopic scale. You might ask, what is exploitable here? 
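The amplitude bookkeeping Scott describes can be sketched in a few lines of Python. This is a toy illustration, not a physics simulation, and the specific amplitude values are made up for the example; the point is only that amplitudes add (and can cancel) before the Born rule squares them into probabilities:

```python
# Toy model of the two-slit setup: each path contributes a complex
# amplitude; the Born rule turns the summed amplitude into a probability.
def detection_probability(path_amplitudes):
    total = sum(path_amplitudes)   # amplitudes add, unlike probabilities
    return abs(total) ** 2         # Born rule: probability = |amplitude|^2

# Two open slits whose amplitudes point in opposite directions cancel,
# so the photon never arrives at this spot:
p_both = detection_probability([0.5 + 0.5j, -0.5 - 0.5j])

# Close one slit and the photon can reach that spot again:
p_one = detection_probability([0.5 + 0.5j])

print(p_both, p_one)  # 0.0 versus roughly 0.5
```

Closing a slit removes one term from the sum, which is exactly how fewer paths can mean a higher probability.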
An ordinary computer has some state which is made of something like bits. I have a bunch of memory elements that can each store either a zero or a one. And if I had 1000 of these bits, then that's two to the 1000th power different states that my computer could be in. But we could say, really, it's only in one of them, and if I looked, I would just see which one it's in. Now, the basic building block of a quantum computer is called a qubit, and a qubit is just a bit that can have some amplitude to be zero and some amplitude to be one. Or as we also say, it can be in a superposition of the zero state and the one state. An example might be an electron that could be in a superposition of its ground state and its first excited state, two different energies that it could have as it orbits the nucleus. Or we could have an atomic nucleus that has some spin state; it can be spinning clockwise, spinning counterclockwise, about some axis, or it could be in a superposition of those two spin configurations. We can have a photon that has a superposition of two different polarizations. Any of these physical systems could represent a qubit, just like there are many, many different physical systems from transistor voltages to abacus beads that can represent a classical bit. But now, here's the key point. If I have two qubits, the rules of quantum mechanics are unequivocal that I can't just specify amplitudes for the first qubit and for the second qubit separately from each other. Rather, I need an amplitude that, if I measure, both qubits would be found in the zero state, so that's zero-zero. And then I need an amplitude that the first qubit would be zero and the second would be one. I need an amplitude that the first qubit would be one and the second would be zero. And I need an amplitude that both of the qubits would be found to be one. So that's four amplitudes. 
And I could have, for example, a superposition of the zero-zero state and the one-one state, and what that would be telling me would be that, if I measured one of the qubits and I found that it was in the one state, then immediately the other qubit would also collapse to the one state, and likewise for the zero state. This is the famous phenomenon of entanglement. But so far, this is actually very closely analogous to what classical probabilities do. I can have two events that are correlated, right? I look at one sock, I see that it's red. And then I know that, if they're a matching pair, then the other sock is also red, and immediately, I learn that, even if the other sock happens to be on the other side of the world. So with amplitudes also, with quantum mechanics, I can have these amplitudes for all possible components of my system together, and this gives rise to what we call entanglement. But now the key is, if I have three qubits, now I need eight amplitudes because there's eight possible strings of three bits; each one can get its own amplitude. And if I have 1000 qubits, then now that's two to the 1000th power (2^{1000}) amplitudes, one for every 1000-bit string. And that's actually much, much more than the number of subatomic particles in the whole observable universe, which is only about 10 to the 80th, or something. So in some sense, for a century, quantum mechanics has been telling us that, just to keep track of the state of (let's say) 1000 measly particles, if they can interact and become entangled with each other, nature, off to the side somewhere, has to maintain some scratch paper with two to the 1000th parameters in it, or even more than that, so more parameters than you could write down in the whole observable universe. And anytime something happens to those particles, nature has to cross off all of those numbers and replace them with new numbers, with a new list of amplitudes. 
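The exponential bookkeeping can be made concrete with a short sketch in Python. The Bell state below is the standard (|00⟩ + |11⟩)/√2 example from the discussion, with amplitudes held in a plain list indexed by the bit string's integer value:

```python
import math

# n qubits need one complex amplitude per n-bit string: 2**n numbers.
def num_amplitudes(n_qubits):
    return 2 ** n_qubits

# Two-qubit entangled Bell state (|00> + |11>)/sqrt(2); list indices
# correspond to the bit strings 00, 01, 10, 11.
amp = 1 / math.sqrt(2)
bell = [amp, 0.0, 0.0, amp]

# Born rule: only the 00 and 11 outcomes ever occur, so measuring one
# qubit as 1 tells you the other qubit is 1 as well.
probs = [abs(a) ** 2 for a in bell]   # roughly [0.5, 0.0, 0.0, 0.5]

# 1000 qubits already need more amplitudes than there are subatomic
# particles in the observable universe (~10**80):
print(num_amplitudes(1000) > 10 ** 80)  # True
```

Storing the list explicitly is exactly what becomes impossible classically: the list doubles with every qubit added.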
That is a staggering amount of computational effort for nature to be going to. And in a sense, chemists and physicists have known about this for generations. They've known it mostly as a practical problem. If you're trying to simulate the quantum mechanics of lots of particles to understand chemical reactions or materials or nuclear physics, things like that, you very, very quickly run into intractable problems, where some of the largest supercomputers in the world — at least that are for academic use — are being used for this, for trying to solve the Schrödinger equation, which is the basic equation that tells you how these lists of amplitudes change over time. The chemists and physicists have developed all sorts of tricks and shortcuts for sometimes getting the answer that they want, but they don't work in full generality. It was 43 years ago that a few physicists — most famously, as I mentioned, Richard Feynman — had the remarkable idea that if nature is giving us this computational lemon, why don't we make lemonade out of it? Why don't we build a computer that itself would be made of qubits that could interact with each other, that could form these entangled states, and that would take advantage of having this superposition of exponentially many states, and that might, for that reason, be exponentially hard to simulate using a conventional computer, because a conventional computer, as far as anyone knows, could calculate all the same things that the quantum computer could, but in order to do that, it would have to calculate these amplitudes by summing up an exponential number of different contributions, which could easily, for 1000 qubits, take longer than the age of the whole universe. 
So now, of course, Feynman and David Deutsch — who also had these ideas — and the others who were talking about this in the early 1980s, they immediately faced the question, which is: supposing that you could build this device — what they called a quantum computer, a machine with qubits where you could program any sequence of interactions that you wanted among a small number of the qubits to transform the amplitudes however you like — supposing that you built that, what would it be good for? And at the time, Feynman and the others were really only able to give one answer to that question, which is, it would be good for simulating quantum mechanics itself. So that was the original application of a quantum computer. And I think 40-some years later, that is still the most compelling application that we know, and that is not the message that often gets out to the public when quantum computing is popularized. But I think that that's the truth of it, that what a quantum computer is really born to do, designed to do, is to solve problems that are inherently about quantum mechanics because we can take the dynamics of one quantum system with an exponential number of amplitudes and so forth, and we can map it to the dynamics of a different quantum system, namely, this single programmable, scalable quantum computer that we hope will eventually get built. But that is not what put quantum computing on the map for most of the intellectual world. There was (like) a decade when it remained this bizarre idea that only a few oddballs were even aware of. What really put it on the map for people was a sequence of dramatic discoveries in the mid-1990s which showed, well, a bunch of things. But one of them was that a quantum computer could sometimes solve a purely classical problem — something that's got nothing to do with quantum mechanics — much faster than we know how to solve it with a classical computer.
And the famous example, still today, 30 years later, is called Shor's algorithm. This was discovered by Peter Shor in 1994, and what he found is that, if you built a scalable, programmable quantum computer with thousands or millions of qubits, then you could use it to efficiently find the prime factors of a huge composite number. For example, a 2000-digit integer you get by multiplying 1000-digit primes, you could find those prime factors. And what I mean by 'efficiently' is using a number of steps that scales roughly with the square of the number of digits. Now, by comparison, the best classical algorithms that we know for factoring, they all take an amount of time that involves some exponential. You can do better than exponential in the number of digits. The best that we know is exponential in the cube root of the number of digits. So Shor's algorithm represented an exponential speed-up, compared to the best-known classical algorithms for this problem. Now, why does anyone care? Well, as your listeners might know, for better or worse, almost the entire Internet is protected by cryptographic codes that depend on the belief that factoring, and a few closely related problems in group theory and number theory, are hard. This is the miracle of public key encryption, of encryption systems like RSA and Diffie-Hellman, and elliptic curve cryptography, the things that provide the backbone of SSL. And whenever we visit any website with HTTPS, our web browser is using these systems, and they let us exchange secure messages including our credit card numbers, without having to agree in advance on a secret key, without having to meet someone from Amazon in a dark alley at four a.m. But they all depend on the belief that factoring and some related problems in number theory are hard. And what Shor showed is that, if you could build a scalable quantum computer, then that's no longer true.
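The scaling gap here can be eyeballed with a crude Python comparison. All constants and lower-order terms are dropped, so these are illustrative growth rates rather than real running times; with the constants removed, the exponential only overtakes the quadratic at larger digit counts, but the asymptotic gap is the point:

```python
import math

# Shor's algorithm: roughly quadratic in the number of digits d.
def shor_steps(d):
    return d ** 2

# Best known classical factoring: roughly exponential in the cube root
# of the number of digits (constants omitted in this sketch).
def classical_steps(d):
    return math.exp(d ** (1 / 3))

# The exponential eventually dwarfs the quadratic, however small its
# exponent looks at first:
for d in (10 ** 3, 10 ** 4, 10 ** 6):
    print(d, shor_steps(d), classical_steps(d))
```

For small d the stripped-down exponential is still below d², but by a million digits it exceeds the quadratic by more than thirty orders of magnitude.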

[promo]

**SPENCER:** I've heard a lot of people express great concern about quantum computers being able to break public key encryption. Is this a genuine concern we should have?

**SCOTT:** Yes. Unlike many of the quantum algorithms that people talk about today, Shor's algorithm is clearly a real exponential speed-up over the best-known classical algorithms for the same task. And I think everything that we've learned in the last 30 years points toward the view that the problems in making it work seem like "merely" technological problems, with giant quotation marks around the "merely." But there is an enormous engineering challenge to build a scalable quantum computer. There's been a huge amount of progress even just in the 25 years that I've been in this field, and once that progress reaches a certain point, you pass what is called the fault-tolerance threshold, which is the point where you can manipulate one or two qubits accurately enough that quantum error correction then starts to work; it starts to give you a net gain compared to not using error correction. And once you've reached that point, it's like passing the critical mass for a nuclear reaction. Then you ought to be able to scale to as many qubits as you want and as many operations as you want, in order to run things like Shor's algorithm and break public key cryptography.

**SPENCER:** And do we have a sense of when this is going to happen? Is it going to be a year from now? Twenty years from now?

**SCOTT:** As you can imagine, I get that question all the time. If I could predict how many years things would take, I wouldn't be a professor; I'd be an investor, and I'd be rich.

**SPENCER:** Can we not extrapolate from the number of qubits?

**SCOTT:** The number of qubits is not the relevant thing to look at, because what really, really matters is, well, how good are these qubits? By which I mean, how long are they going to maintain their quantum state? And people learned years ago that if you just want to tell the world that you have thousands of qubits, you can do that by building qubits that are kind of garbage, kind of not that good. And this gets into how the hucksters started taking over this field. But I think the relevant number, the number that I would look at is, how accurately can you apply a two-qubit gate? With what fidelity can I just do an entangling operation between two qubits? Now, what the theory says is that, if I can do that with about 99.99% accuracy, then known quantum error correction methods should work, meaning that, at that point, I could scale to as many qubits as I wanted, do as many operations with them as I wanted. It would be just like classical computing; it's just a question of: How much do I want to spend? How much integration do I want to do? And we're not at that point yet, but we can extrapolate and try to see how close we are. When I started in this field in the late 90s, it would be amazing if you could do a two-qubit gate with 50% fidelity. That would be a Nature paper. And then at some point, that 50% became 90%. And then when Google announced its quantum supremacy demonstrations, which were five years ago, they demonstrated that they had a system with (like) 53 qubits, and they can do about 1000 operations on those qubits. And each two-qubit operation had about 99.5% fidelity. Now, within the last year, if you look at the best results from trapped ions and from neutral atoms — these are different architectures than the ones that Google used, which are called superconducting qubits — you look at these trapped ion or neutral atom results from companies like Quantinuum or QuEra. In the last year, they've reported 99.9% fidelities for two-qubit gates.
So it looks like we are about one "9" away from the point where fault tolerance should scalably work. So if you just plot those error rates on a log scale as a function of time, then that makes you feel pretty optimistic about it. That makes you feel like, if that continues, then within the next decade, fault-tolerant quantum computing ought to be possible. Of course, there could be unexpected roadblocks along the way. Maybe it'll be too expensive and people will give up. Any number of things can happen. But looking at the gate fidelities as a function of time, and looking at what would be needed to do fault-tolerance, that does make you feel like it's a matter of years really — maybe a decade or something like that — until we could see some useful quantum fault tolerance. Then after that, there's a big engineering problem, of course, to scale it up to the scale where you could actually run Shor's algorithm to break RSA. Just to put some numbers on that, to break the encryption that currently protects the Internet, you're going to need (like) thousands of logical qubits. If you use these error correction methods — which is the only way we know to build a quantum computer at scale — then you're going to need probably thousands of physical qubits for every logical qubit. So now you're talking about millions of physical qubits. And now, with any current technologies, you're starting to imagine a whole building full of dilution refrigerators, or something like that. It starts to look like some of the first classical computers like ENIAC, things like that. It starts to look like a gigantic thing that you would need to build if you wanted to break RSA. If someone has $100 billion, I certainly cannot say with any confidence that they could not do that within the decade.
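The "one 9 away" framing can be made concrete with a hypothetical helper that counts the nines of a gate fidelity. The 99.99% threshold figure is the one quoted in this discussion; real fault-tolerance thresholds depend on the specific error-correcting code:

```python
import math

# "Nines" of fidelity: -log10(error rate). So 99.5% is about 2.3 nines,
# 99.9% is 3 nines, and the quoted ~99.99% threshold is 4 nines.
def nines(fidelity):
    return -math.log10(1 - fidelity)

# Milestones mentioned above (labels are shorthand, not official names):
for label, f in [("late-90s gates", 0.50),
                 ("Google 2019", 0.995),
                 ("recent trapped ions", 0.999),
                 ("quoted threshold", 0.9999)]:
    print(label, round(nines(f), 2))
```

On this log scale, the field's progress looks like adding roughly one nine per era, which is why extrapolating the plot feels optimistic.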

**SPENCER:** Presumably, this would begin with governments building it. For example, the US government might have a vested interest in cracking this.

**SCOTT:** This is the thing. You can see how the field has evolved in time. In the aftermath of Shor's algorithm being discovered in the 90s, of course some of the main people interested in quantum computing were the intelligence agencies, NSA and so forth. And they remain interested in it. But eventually people came to the obvious conclusion that breaking RSA is kind of dramatic, but it's also not a sustainable way of making money for anyone. [laughs] And even for the intelligence agencies, or for criminal syndicates, or for whoever wants to read other people's email, access their credit card numbers, whatever, even for them, this is all a temporary advantage because, once it becomes known that quantum computers are practical, that they can break these cryptographic systems, then the world will react by migrating to different forms of cryptography. And that is already starting to happen. There is already a big push to move to what is called post-quantum cryptography or quantum-resistant cryptography. So it's important for people to understand that for 20 years or so, we have known just classical public key cryptosystems that, as far as anyone can tell, seem to resist attack, even by quantum computers.

**SPENCER:** So is it like a steamroller is coming, but we can see it from ten miles away, and there's just not really a danger?

**SCOTT:** That's right. Yeah, you can see it from ten miles away. Now, unfortunately, the world moves so slowly [laughs] that it still might not get out of the way in time. To upgrade the entire architecture of the Internet turns out to be a vast undertaking when you're in a world where people still have Windows installations that had known insecurities 20 years ago that were never patched. Practical security just makes you want to weep with how bad things are. But in principle, we know what to do. We know these public key cryptosystems that are typically based on problems involving high dimensional lattices. And these seem very, very hard, even for quantum computers. None of this is proven. It's not even proven that any of these cryptographic codes are hard to break with classical computers. These are profound unsolved problems. But we have some reasonable confidence in these quantum-resistant methods of encryption; it's just that these are not the methods that we currently use. But the hope is that, with enough effort, we can upgrade the whole world, and then in some sense, with security, we'll all just be back where we started. And so then you could say, well, then, from a civilizational perspective, what was the point of going to all this effort to build the quantum computers if they just broke these cryptosystems, and they forced us to use different cryptosystems? I think if you're trying to make a positive case for why the world should have quantum computers, then you have a much better bet talking about quantum simulation — which was Feynman's original application — that could have enormous potential uses for designing new drugs, designing new materials, solar cells, maybe high-temperature superconductors. None of these things is a slam dunk. Even once you have the quantum computer, you still have to provide value. You have to beat all the best tricks and heuristics that anyone can think of with their classical computers.
And you have to do this for a problem that actually matters in practice. You still have a combinatorial search among many, many different chemical reactions or drugs, and a quantum computer doesn't necessarily help you all that much with that part. Where it helps the most is, given some material or some chemical reaction, if I want to know what is its behavior — like does this material superconduct or how fast is this chemical reaction — I want a single programmable computer that can solve this problem quickly without having to synthesize the chemicals in my lab for every specific case. This is (I think) the main economic promise of a quantum computer. And then there's the case that has taken over a lot of the discourse about quantum computing in the last 20 years, which is neither of those things; it's not quantum simulation and it's not breaking cryptography. But at some point, the idea became lodged in people's heads that a quantum computer is just going to massively accelerate just about everything: optimization, machine learning, financial portfolio management, scheduling flights, vehicle routing, all these things. And then the case for quantum computing increasingly came to rely on all of those things. And that, I regard as incredibly unfortunate because the truth of the matter is that the quantum algorithms that we actually know that might help with those types of tasks seem to provide only a very modest benefit.

**SPENCER:** Yeah, let's talk about the optimization algorithms.

**SCOTT:** Absolutely.

[promo]

**SPENCER:** Because some of the very earliest (quote unquote) 'quantum computers,' my understanding is that all they could do was this kind of optimization where you're trying to find the maximum of a function or minimum of a function. Is the idea that they're just not that much better than classical solutions?

**SCOTT:** With the devices that currently exist, we don't see evidence that they are beating a classical computer at all in a fair comparison, for any optimization problem. This is when the massive public confusion about what quantum computers are good for started to really enter the field. And this was fueled by some of the companies themselves, even the scientists who realized that, "Well, this is what people want to hear, and so we should just put the most optimistic spin imaginable on the prospects of quantum computers in this area," regardless of what is true. Now, what we learned in the 90s is that there is a quantum algorithm that can be used for optimization and for combinatorial search that has extremely, extremely broad applicability, unlike Shor's algorithm, and this is called Grover's algorithm. And basically, what Grover's algorithm lets you do is, it lets you take a set of (let's say) N possible solutions to some problem, and search through that set to find a solution that's good, that matches your criteria, in only about the square root of N steps. So it's a quadratic speed-up over a classical brute force search, where you just evaluate all the solutions one by one. It turns out that maybe half or more than half of what's in an undergraduate CS textbook or an algorithms textbook has some component that can be 'Groverized,' meaning that can be sped up using Grover's algorithm. In so many algorithms, there is some component of just a for-loop, where you just try different possibilities and look for one that works. And Grover is this very generic speed-up for these types of things. But the drawback is that, unlike Shor's algorithm, this is not an exponential speed-up; this is a quadratic or a square root speed-up. And you might say, well, a square root speed-up still sounds pretty good. The square root of 10 to the 20th is 10 to the 10th. That's still a lot but it's a lot less.
The trouble is that Grover's algorithm, like Shor's algorithm, is going to require fault-tolerant quantum computers, ones with this full error correction mechanism, and that incurs an enormous overhead. Optimistically, let's say a factor of a million with currently known error correcting codes, compared to using a classical computer. So if we let N be the number of possible solutions to your problem, now the question is how large does N have to be before a million times the square root of N is less than N? And the answer is: N has to be about a trillion. So people have done these detailed cost estimates starting around 2017, and what they found, indeed, is that Grover's algorithm only starts to provide a net win over classical GPU clusters (let's say) when you get to really, really monstrously enormous combinatorial search problems or optimization problems involving many trillions or quadrillions of possible solutions. So eventually, we hope that there will be some benefit from Grover. Maybe that benefit will even come sooner if we figure out more efficient ways of doing error correction, which is a major focus of quantum computing research. But because that speed-up, even in the best case, is modest, what people started doing about 20 years ago is, they said, "Well, forget about Grover's algorithm. In fact, we don't need any kind of provable speed-up over a classical computer, because we use all sorts of classical algorithms like simulated annealing or deep learning, where we have no proof that they work or how fast they work, but we just try them in practice, and often enough, they do work. So why shouldn't it be the same with quantum computing?" And that started a trend of people just proposing quantum algorithms that were heuristic, so the quantum adiabatic algorithm, quantum annealing, this thing called QAOA — quantum approximate optimization algorithm — these are all in this family of algorithms.
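The trillion-solution crossover here is simple arithmetic, sketched below. The 10**6 overhead factor is the optimistic error-correction estimate from this discussion, so treat it as an assumption rather than a measured quantity:

```python
import math

OVERHEAD = 10 ** 6   # assumed fault-tolerance slowdown per Grover step

def grover_cost(n):
    # Grover's algorithm: ~sqrt(n) evaluations, each paying the overhead.
    return OVERHEAD * math.sqrt(n)

def brute_force_cost(n):
    # Classical brute force: evaluate all n candidate solutions.
    return n

# The quantum search only wins once OVERHEAD * sqrt(n) < n,
# i.e. sqrt(n) > 10**6, i.e. n > 10**12 (a trillion).
for n in (10 ** 9, 10 ** 12, 10 ** 15):
    winner = "quantum" if grover_cost(n) < brute_force_cost(n) else "classical"
    print(n, winner)
```

Shrinking the overhead (better error-correcting codes) moves the crossover down quadratically, which is why more efficient error correction is such a live research target.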
And people got very excited about these things, because they could not rule out the possibility that, maybe for some optimization problems, these algorithms will give you an exponential speed-up over what you could have done with a classical computer. But the key point I want to make is that, 20 years on, this remains about the best that we can say. We cannot rule out the possibility that there are huge speed-ups to be had by these algorithms that could be much more than from Grover's algorithm. But we don't really have a positive case that there is a speed-up from them that is more than the Grover speed-up and that would really, really matter in practice. We can construct very artificial exponential speed-ups using some of these algorithms. Even that was a very difficult recent achievement, and that's great. But in order to do that, you have to construct some really, really bizarre fitness landscapes, and it's not clear whether anything like those landscapes should be expected to occur in practice. We might not know the answer for sure until we have the quantum computers to experiment with at a large scale. And so now, this not knowing, this ignorance has created this immense opening for unscrupulous people to say, "Well, let's just make the most optimistic assumption imaginable, and let's just run with that." Or let's just say, "Look, we are using a quantum computer to route vehicles. We're using a quantum computer to recognize handwriting, to do machine learning." And people learn that, if you say these things, then investors and the press and funding agencies will just eat this up with mustard. They will just give you hundreds of millions of dollars. And the most basic question that a scientist would want to ask: "Okay, but could you have also done this just as well with a classical computer?" People won't even reach the point of asking that question. [laughs] That is kind of astounding, right? 
But maybe if people have seen what happened in cryptocurrency, or other things that had this bubble phenomenon, maybe it will make more sense to them. So I see my job as to just continue to tell the truth about it as best as I can, regardless of what people want to be true. And I think the truth is, yeah, there might be quantum speed-ups for machine learning or for optimizations that are better than the Grover speed-up, and that matter in practice, but that remains an open question.

**SPENCER:** Regarding Grover's algorithm, you mentioned this factor of, optimistically, a million by which these run slower. Could we expect theory to improve on that factor of a million, or is that really just something fundamental?

**SCOTT:** Absolutely, it might be improvable. A theoretical computer scientist — or at least a caricature of one — would say, "Well, that million is merely a constant factor, so I'm not concerned with it anymore. I understand the asymptotic behavior; the problem is solved." Of course, people do care about the constant if the constant is a million. And there actually is a lot of effort being put toward designing better error correcting codes, better methods for doing fault tolerance, and there have been some breakthroughs within the past year, including from a group at IBM, that may have reduced some of the overheads by an order of magnitude or so. But a lot of these things have trade-offs. So, say, you can cope with a higher error rate in your two-qubit gates if you have a much larger number of ancilla qubits, for example. So there's a trade-off between those things. Or you can get a much better error correcting code with a much lower overhead if, instead of only coupling qubits that are physically next to each other — neighbors on your chip or whatever — you can couple arbitrary pairs of qubits. And now you need a technology that's able to pick the qubits up and move them around, or send signals between arbitrary pairs of qubits. So that might make some of these architectures — like trapped ions or neutral atoms — look better compared to superconducting qubits. The superconducting qubits are really just fixed in place on the chip. The ions, you can pick them up with a laser; you can move them around, but moving them around takes time. Partly, these are low-level architecture questions, but they're very, very important ones. And I think some people's real hope is that we'll discover just a fundamentally better way of doing error correction that will not have these huge overheads. Microsoft has placed a huge bet on an approach called topological quantum computing.
This idea was invented around 2000 by people like Michael Freedman and Alexei Kitaev. It's a remarkable idea for building a quantum computer that would have some natural error correction built in to just the physics of how it operates, so that you wouldn't need so much costly overhead to do error correction. The one catch is that you would need to create a new state of matter which has never been seen in nature.

**SPENCER:** Sounds like a big catch. [laughs]

**SCOTT:** Yeah, you'd need to create something called topological matter that has these excitations, these quasiparticles in it, that would behave neither as fermions nor as bosons, which are the two basic types of particle in quantum mechanics, at least in our three-dimensional universe. But if you confine particles to two dimensions, it turns out that there are additional kinds of particles that are possible — these are called anyons. What Kitaev and the others proved 20-some years ago is that, if you could engineer these non-abelian anyons, as they're called, then just by braiding them around each other in an appropriate pattern, you could do a universal quantum computation, like Shor's factoring algorithm, for example. So just moving the particles past each other would have the effect of applying two-qubit gates. And furthermore, if you wanted to cause an error, then you would actually have to change the topology of how the particles were braided around each other; a small adjustment to the path of a particle that doesn't change the topology makes no difference. That's why this is called topological quantum computing. It's a very, very striking idea, and this is the closest thing that we have to what you're asking for, to this fundamentally better way of doing error correction. But so far, it's not even clear whether people have succeeded at making even one topological qubit. Microsoft and groups in the Netherlands that Microsoft funds have been trying very hard to do that, and they've made some progress on it.

**SPENCER:** What do you think's going on in the minds of people who run these numerous quantum computing companies that are offering all these existing (quote, unquote) 'solutions'? Do you think that they know that their machines don't work to solve real problems and they just are hopeful that one day they'll figure it out, or do you think something else is happening?

**SCOTT:** There is a huge spectrum among quantum computing companies. I should make that clear. There are some that are very, very serious. You can talk to them and they will tell you — maybe they'll focus a little bit more on the positive than I would — but they'll tell you things that are entirely consistent with what I am saying, or with what any academic would and could say. When I get hired to evaluate these companies, the highest recommendation that I can give is, "I hope that this company succeeds." And I'm very happy if someone else invests money in them, a venture capitalist with a high tolerance for risk. If they invest in them and let a thousand flowers bloom and let people try these ambitious things, hopefully at least one of them will succeed. So there are quantum computing startups that I'm a fan of, and that I talk to and work with. I even try to help if I can, and I hope that they succeed. Of course, where I draw my line is: are people saying things about what quantum computers are good for, or what they're good for in the near future, that I know are false, and that these company founders ought to know are false also? And yet, they just go and say the things anyway, like that quantum computers promise some huge advantage for recognizing handwriting or recognizing faces or vehicle routing or things like that. So I think it's complicated. You could ask the same question: "What was going through the mind of Elizabeth Holmes when she was running Theranos? To what extent did she really believe it? Had she really deluded herself? To what extent did she know that she was lying?" Or maybe she considered the lies justified because eventually she was going to make it true just by sheer force of will. That seems to be the belief: you tell the funder whatever they need to hear in order to get the funding so that you can stay in business, and then eventually, you will make it all true.
I think that's my best guess for what is going through their minds. But it's also — and I mentioned this before — there's this latching on to uncertainty, like, "Okay, well, no one can prove that QAOA or quantum annealing or whatever are not going to give you an exponential speed-up for some optimization problems that matter in practice, so why not just assume they will?" There's also — and this is often very explicit — an ideology of: you build it, and then surely, someone will figure out the application. How well would the builders of ENIAC have been able to foresee TikTok and Candy Crush and all the things that we waste our time with today with computers? And so surely, there must be dozens more applications of quantum computers that just no one has even thought of yet, because they don't have the quantum computers to play with. And I actually think there is something to that. Surely there are more applications that we haven't thought of yet. And surely being able to experiment with quantum computers will help in figuring out what they are good for. But you still have this enormous constraint that I would say is the main thing that makes quantum algorithms difficult, which you didn't have with classical algorithms. And this is that, in order to be useful, you have to beat the best that can be done with a classical algorithm. Classical computers already exist. They are the technological marvels of our whole civilization. They work unbelievably, ungodly well, and the quantum computer has to beat, not just the naïve classical algorithm, but the best classical algorithm. That is an extremely high bar to clear. And you could say we didn't have a right to demand of the universe that that bar should ever be cleared for any classical problem. It was an amazing discovery that Peter Shor made that there are a few problems — even if it's just breaking cryptographic codes — where you can clear that bar. We had no right to demand that the universe give us others.
Hopefully, there will be some others. But fundamentally, a quantum computer is just a more specialized thing than many people would like it to be.

**SPENCER:** To your knowledge, has anything ever been done with a quantum computer that was useful, that couldn't have been done as well or better with a classical computer?

**SCOTT:** Not unless we count as useful things like characterizing the abilities of a quantum computer for some task, in order to learn stuff that is useful toward building the future of quantum computers. If we're talking about a purely classical problem (classical input, classical output), I would say that we are just now at the stage where, within the next few years, this bar might very well be cleared: do an interesting simulation of a quantum system, and then get out some classical answer that is interesting to us, or at least interesting to the relevant community of chemists or physicists, that they really wanted to know and that they were not able to get with a classical computer. That bar of useful quantum simulation might be clearable within the next couple of years, even without full error correction. But I don't think it's been cleared yet, to my knowledge.

[promo]

**SPENCER:** Sometimes people think that quantum computing is much more useful than it is because they hear explanations like, "Well, a quantum computer, instead of just searching for each item one by one, it can try all combinations at once." And if you think of it that way, that sounds incredible. So what's wrong with that reasoning?

**SCOTT:** [laughs] Tell me about it, right? Perfect question. It's true that with a quantum computer, you can create an equal superposition over all the possible answers to your problem, even if there are exponentially many of them. That is even an easy thing to do with a quantum computer. So that part is true. The trouble is that, for a computer to be useful, well, at some point, you need to observe it; at some point, you need to measure, you need to look, you need to get an output. And if I just took an equal superposition over all the answers, and I just measured it to pick an answer, not having done anything else, then that Born Rule that I mentioned a while ago — the rules of quantum mechanics are unequivocal — says that all I'm going to see will be a uniformly random answer. And of course, if I just wanted a random answer, well, then I didn't need a quantum computer; I could have just flipped a coin a bunch of times. I could have saved billions of dollars. So the only hope of getting an advantage from a quantum computer is to exploit the way that these amplitudes, being complex numbers, work differently from the probabilities that we're used to. And that's why I harped on that so much before. With every algorithm for a quantum computer, what you're trying to do is choreograph a pattern of interference so that, for each wrong answer — each answer you don't want to see — some of the contributions to its amplitude are positive (let's say) and some are negative, so they cancel each other out. Whereas, for the right answer — the answer you do want to see — you want all the contributions to its amplitude to be reinforcing each other, to be pointing in the same direction, so that they add up constructively. If you can arrange that, then when you measure, you're going to see the right answer with a high probability.
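To make this interference choreography concrete, here is a toy sketch (not something walked through in the episode) that tracks Grover's amplitudes classically for a 16-item search; the choice of marked index is arbitrary.

```python
import math

# Toy classical simulation of Grover's algorithm on N = 16 items, with
# one 'marked' right answer. We track all 16 real amplitudes directly;
# this illustrates the interference bookkeeping, not any speed-up.
N = 16
marked = 11                                # arbitrary choice of right answer

amps = [1 / math.sqrt(N)] * N              # equal superposition over all answers

iterations = round(math.pi / 4 * math.sqrt(N))   # ~ (pi/4) * sqrt(N) = 3
for _ in range(iterations):
    # Oracle: flip the sign of the right answer's amplitude.
    amps[marked] = -amps[marked]
    # Diffusion: reflect every amplitude about the mean. Contributions to
    # wrong answers partially cancel; the marked one is reinforced.
    mean = sum(amps) / N
    amps = [2 * mean - a for a in amps]

# Born rule: the probability of an outcome is |amplitude|^2.
print(round(amps[marked] ** 2, 3))         # → 0.961, vs. 1/16 for a random guess
```

After only three iterations (about √16), the deliberately arranged cancellations push nearly all the probability onto the marked answer, even though the loop never "looks at" which answer is marked except through the sign flip.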
But the tricky part is, you have to do this — boost the probability of the right answer — via this delicate interference pattern, even though you yourself don't know in advance which answer is the right one. Because if you already knew, what would be the point? And you also have to do this faster than even the cleverest classical algorithm could have zeroed in on that right answer. Once you put it that way, then it becomes easy to see that nature has given us this really bizarre hammer. It's something that's weirder than any science fiction writer would have had the imagination to come up with. And we have to now go look around and say, "Well, are there any nails in classical computer science that this hammer is adapted to hit?" And it was a very, very non-trivial discovery when Shor showed that, for the specific problem of factoring numbers, this ability is exactly something that is helpful to you. But it's not just a simple matter of trying all the different possible factors in parallel. That doesn't work; if that did work, then you wouldn't have needed Peter Shor to think of it. Anyone could have thought of that, right? What do you do if you want to factor numbers with a quantum computer — well, it takes me about three lectures to explain it in my undergraduate class — but you exploit something that Shor invented, and which we now call the quantum Fourier transform. It is an operation that acts on this gigantic list of amplitudes and that has the effect of doing the Fourier transform, which is something that can take a periodic sequence and reveal the period with which that sequence of numbers repeats. It turns out that, if you can do that kind of thing, then for reasons of classical number theory, you can then find prime factors and also calculate discrete logarithms, and do lots of other interesting things.
But you have to do this Fourier transform of this exponentially long list of amplitudes via a sequence of one- and two-qubit gates — not exponentially many, but polynomially many — which has the effect of Fourier transforming this exponentially large vector, causing destructive interference on outcomes that don't tell you about the period of your periodic function, and constructive interference on the outcomes that do. That is very, very amazing. But it's also very special to these problems in number theory and group theory, and that is why Shor's algorithm has not been much more broadly applicable than just breaking cryptography.
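The period-finding step has a purely classical analogue that can be sketched in a few lines. To be clear about what is assumed: the toy parameters (N = 15, base a = 7, 64 sample points) are this sketch's own, and an ordinary DFT stands in for the quantum Fourier transform, which achieves the same effect on exponentially many amplitudes with polynomially many gates.

```python
import cmath
from math import gcd

# Classical sketch of the period-finding at the heart of Shor's algorithm,
# on toy numbers: factor N = 15 using base a = 7 and M = 64 sample points.
N, a, M = 15, 7, 64
seq = [pow(a, x, N) for x in range(M)]     # a^x mod N repeats: 1, 7, 4, 13, ...

# Plain discrete Fourier transform (magnitudes only).
mag = [abs(sum(s * cmath.exp(-2j * cmath.pi * k * x / M)
               for x, s in enumerate(seq)))
       for k in range(M)]

# A period-r sequence puts all its Fourier weight on multiples of M/r,
# so the first big peak after k = 0 sits at M/r.
first_peak = next(k for k in range(1, M) if mag[k] > 0.1 * mag[0])
r = M // first_peak                        # recovered period: 4

# Classical number theory then turns the period into factors of N.
x = pow(a, r // 2, N)
print(gcd(x - 1, N), gcd(x + 1, N))        # → 3 5
```

The Fourier transform never "sees" the period directly; it just concentrates weight on the outcomes that encode it, which is exactly the constructive/destructive interference pattern described above.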

**SPENCER:** We've talked about companies playing a role and spreading misleading information about quantum computers. But you've also seen this from some physicists. For example, I just want to give a quote from Michio Kaku. He says, "Let's look at a classical computer calculating how a mouse navigates a maze. It is painful. One by one, it has to map every single left turn, right turn, left turn, right turn, before it finds the goal. Now a quantum computer scans all possible routes simultaneously. This is amazing. How many turns are there? Hundreds of possible turns, right? Quantum computers do it all at once." So I'm wondering, besides companies misleading people about the nature of these things, what else is going on here that leaves people so confused?

**SCOTT:** This is the kind of thing that just makes me bang my head against the desk. And this is how my blog (I think) first became widely read. Almost 20 years ago, I realized that there was this giant gap, that everyone in the research community knew that this was BS, but somehow the message was not getting out to journalists, to the public, to policymakers. So I might as well do that.

**SPENCER:** Michio, he's a physicist, right? I mean, he's not just a random person.

**SCOTT:** I reviewed his book on my blog. At one point in the past, he was a real string theorist, so he knows quantum mechanics. He clearly knows better. But at some point, he became mostly known for going on TV and talking about UFOs — UFOs being plausible, extraterrestrial visitors and things of that kind — and just writing books about how technology will change everything. I enjoyed his books about string theory in the 90s. But then I noticed his books just gradually got worse and worse, less and less analysis of anything, and more and more just pure hype and boosterism. Maybe that was what sold or that was what he got rewarded for. And then, a year or two ago, he wrote possibly the worst book, the single worst book, about quantum computing that I've ever seen, which is an achievement, in a way. Because there's been a lot, an unbelievable amount of bad writing and false claims about quantum computing, and even there, this stands out. And you can look at the acknowledgments of the book, and you can see he sort of boasts about all the famous physicists whom he's known. And he lists not a single quantum computing expert whom he talked to in the course of writing a book about quantum computing, not one. And so he indeed makes exactly the mistakes that you would make if you had only read popular articles about quantum computing and if you had never talked to a single person who knows the field, such as saying that it just tries all the possible answers in parallel. I'm sure that his book has sold more than anything that I've written. So maybe I'm the foolish one.

**SPENCER:** What do you think the role of scientists should be in terms of fighting misinformation? I mean, you've been pushing this line for a long time.

**SCOTT:** Look, I am trying to just tell the truth as best I can. I'm not a saint. I don't respond to everything that comes out because I would rather spend my time doing my own research or coming up with something original; that's more interesting to me. But I do feel a sort of duty, or maybe I feel like, if I'm already doing this, then maybe I should keep doing it so that my colleagues don't have to. At least someone ought to be filling this role. But everyone has to decide for themselves, what are their ethical lines? For me, I will do consulting. For example, if some VC firm asks me to look at this quantum computing startup, talk to them, report back, I will accept a supplemental income to do things like that, or to give talks to companies about quantum computing. But what I will not do is accept money where I feel like this money is contingent on my shading the truth in a certain way, on my giving a very optimistic vision of what this quantum algorithm or this quantum computing architecture will be capable of, that I might not believe. And I think there are orders of magnitude more money available, if people are willing to cross that line, but I'm not.

**SPENCER:** One thing I wonder about: in social science, we've seen a situation where sometimes — not most social scientists — but some social scientists will kind of exaggerate the effectiveness of the techniques, or they'll exaggerate the claims to some extent, which seems to boost their own careers, gets them big articles about them, and so on. And other social scientists will roll their eyes or say this is unethical, but at the end of the day, it actually makes the whole field look better because it looks like social science is doing this super cool stuff, and people are reading about it in the media. And I'm wondering, is there a similar situation with physicists, where many of them are like, "This is ridiculous, this is over the top," but in some ways, the whole field benefits because it gets people excited about these techniques and maybe moves money into it. And so it's sort of, "Well, do you really want to fight against it that hard if it's benefiting your whole discipline?"

**SCOTT:** Of course, this argument often gets made to me explicitly, "Isn't this a team effort? Do you want a scalable quantum computer to be built or not? Well, if you want it to be built, then people have to get excited about it, about a narrative or a vision. And so maybe we shouldn't outright lie, but we have to sell the vision to them in a way that's going to be maximally appealing." Fine, I understand that, if I were running a quantum computing startup, then that would be my job, in a sense; that would be my duty to my employees. So I feel fortunate that I'm not in that position where I have to compromise my best assessment of the truth. And I feel like, well, it seems also valuable to have someone who was just blogging the truth as best as he sees it. If you are leading people to believe that quantum computers are going to massively accelerate classical machine learning, for example, and even if you're not explicitly saying anything false — you're just leading them to have that impression — and then later, they discover that that impression was wrong, then they're gonna get very angry. And they're gonna say, "Well, why didn't anyone tell me this?" At least I don't have that problem. At least, I can point to 20 years of blog posts where I'm like, "Yeah, I've been telling you this as loud as I possibly could."

**SPENCER:** Besides your own personal contributions to this, what do you think other scientists should be doing or not doing to help prevent this kind of misinformation?

**SCOTT:** Well, everyone has to decide for themselves what mix of things they want to do. A lot of my colleagues would say that they're grateful for what I've done or what I'm doing, but they're not as inclined to popular writing or to controversies, to arguing with people on the Internet, and they want to concentrate more on research, and that's totally fine to have people like that. But I would encourage people — especially when people have rare types of expertise — I think that there's a lot of value in them coming out and saying stuff that might be obvious to them and their colleagues, but is not obvious to people in the wider world, and that actually matters to them. And I think it's especially important when you remember that there are people who are just totally opposed to quantum computing research. They think that quantum computing is just completely a fraud and a scam, and it would never work. And the truth is that, if that were actually true — if quantum computing were impossible for some fundamental reason — that would be much more interesting than if it is possible, because that itself would be a revolution in physics. And the burden would shift to that person to explain why a quantum computer is not possible, explain what you are going to change in our current understanding of physics to make it not be possible, or to make all quantum systems be efficiently simulatable with a classical computer. So there really is something scientifically fascinating here. That is why I got into this field. One way or the other, we want to find out: does nature have this amazing computational ability under the hood? Even if it's only for a few problems, like factoring or quantum simulations, if you can show empirically that you get exponential speed-ups for those problems, you have shown this amazing fact about reality itself, that it actually had this power, even if it's very hard to get at because of the difficulty of choreographing interference patterns. 
So to me, all the applications are just like icing on the cake. It could have had zero useful applications and I still would have wanted to pursue this. If there are applications for quantum simulation, or maybe eventually from Grover's algorithm, well, then so much the better. And I think that's the honest case. That's the case that brought me into this field. That's the case that I would still make today. But that is not the case that has driven the large investments that we've seen over the past decade. So I feel like I want to be out there, making the case that I'm happy with. Of course, if people are doing things like spending billions of dollars to build scalable quantum computers, and I like what they're doing, but I feel like they're doing the right things for the wrong reasons, well, then, I'm of two minds about that. [laughs] Maybe I feel about it the way that Carl Sagan felt about the Apollo mission where he knew that, from a scientific standpoint, it is a massive waste of money to send humans to the moon, and anything that humans can do, robots can do cheaper and better. But on the other hand, if it's going to happen anyway, then he wants to get as much science as possible out of it. I feel like the reasons why so many companies are now trying to build quantum computers are not the reasons that I would endorse. But on the other hand, I would endorse the goal of building a scalable quantum computer, of seeing whether that can be done at all. Given that, it seems like my response ought to be to talk to, interact with, all the people who were involved in these efforts, but at the same time, just very clearly tell everyone what I believe is true. If people want to spend a lot of money in ways that I wouldn't, well, then, as long as it's not my money, that's fine. And the one place where I draw the line is when they say things that aren't true, and then I try to challenge that and correct it.

**SPENCER:** Before we wrap up, my last question for you is, it might seem from this discussion that we've been pretty cynical about quantum computing, that it's really overblown in all these ways. But I have the sense that it is actually an incredibly exciting time, legitimately, for quantum computing, and that we're going to see really amazing things in the next 20 years happening with it. What's your feeling? Is now actually a good time to go into this field? What would you say?

**SCOTT:** Well, that depends on the individual and what they're interested in. But I would say, if you're really, really interested in (let's say) the fundamentals of computer science and of physics, and maybe also math and engineering and philosophy, quantum computing is kind of amazing in the way that it sits at the intersection of all of those things. It's like one package where you get physics, math, engineering, philosophy, computer science, and where these central concerns of so many different disciplines come together. And so if that's what you're excited about, then, look, I put my money where my mouth is; I am training new quantum computing researchers every day. That's my day job. I would say if your interest in this field is conditional on quantum computing delivering a huge speed-up for handwriting recognition, or for oil and gas exploration, or for some specific classical application area, where the best that we can do is run one of these heuristic algorithms and cross our fingers, well, then, I would say that that is not a very good case; you may be setting yourself up for disappointment, and maybe just going into classical AI, classical machine learning, would be a better choice for you, because that is something that really will revolutionize just about everything, or just about everything that human intelligence is applied to. Quantum computing, from the very beginning, is this more specific thing. But if you have this fundamental curiosity about what are the ultimate limits of computation allowed by the laws of physics, then there is something irresistible about it. I absolutely would say that and I would also say that this is an exciting time because we are just now starting to actually see some of the basic building blocks of quantum error correction. 
QuEra, a company in Boston, did an experiment just this year, where it was not full quantum fault tolerance, but for some special tasks, they reported a gain by using a quantum error correcting code to do a computation compared to not using it, with 48 encoded qubits and several hundred physical qubits. We can now do that sort of thing. At least for very, very contrived sampling tasks, we can challenge the best thing that can be done with any existing classical computer. That's what quantum supremacy is about. That is also new within the last few years. For the first time, we can now apply thousands or tens of thousands of operations to hundreds of qubits and actually get a reasonable result. When I entered this field 25 years ago, they said it would be amazing for just two qubits in isolation to talk to each other with any non-trivial fidelity at all. And now, the fidelities are approaching or exceeding 99.9%. They're getting close to where this actually ought to be scalable. So there are a lot of exciting things about it right now, as long as you go in with a clear-eyed view about what this is and isn't going to be good for.

**SPENCER:** Scott, thanks so much for coming on.

**SCOTT:** Absolutely.

[outro]

**JOSH:** A listener asks: How is Clearer Thinking funded?

**SPENCER:** Our funding comes from some grants. So we've had some generous grants from different people interested in our mission. Also, I've just self-funded some of our operations. So it's really a mix of those two.

**Staff**

- Spencer Greenberg — Host / Director
- Josh Castle — Producer
- Ryan Kessler — Audio Engineer
- Uri Bram — Factotum
- WeAmplify — Transcriptionists
- Alexandria D. — Research and Special Projects Assistant
