CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 223: Physical limits and the long-term future (with Anders Sandberg)

August 15, 2024

How much energy is needed for GDP growth? Would our civilization have developed at the same rate without fossil fuels? Could we potentially do the same things we're currently doing but with significantly less energy? How different would the world look if we'd developed nuclear energy much earlier? Why can't anything go faster than light? Will the heat death of the universe really be "the end" for everything? How can difficult concepts be communicated in simple ways that nevertheless avoid being misleading or confusing? Is energy conservation an unbreakable law? How likely is it that advanced alien civilizations exist? What are S-risks? Can global civilizations be virtuous? What is panspermia? How can we make better backups of our knowledge and culture?

Anders Sandberg is a researcher at the Institute for Futures Studies in Sweden. He was formerly a senior research fellow at the Future of Humanity Institute at the University of Oxford. His research deals with emerging technologies, the ethics of human enhancement, global and existential risks, and very long-range futures. Follow him on Twitter / X at @anderssandberg.

SPENCER: Anders, welcome.

ANDERS: Thank you. This is going to be so fun.

SPENCER: Oh, yeah, I'm really excited for it. I feel like, not only are you a fantastic intellectual, but you bring such energy to the topics that it's sort of infectious. I love that about your work.

ANDERS: Yeah, maybe we should do some experiment and find the most boring topic possible and see if I can bring energy to it.

SPENCER: Well, I've got a great starting topic for you: global energy and GDP. That sounds pretty boring, so let's start there. Tell me, what have you been thinking about regarding global energy and GDP lately?

ANDERS: I think the interesting question is: how much energy do you need to get GDP growth? And is it that, in order to get good economic growth — we might have to return to what the heck that is — you actually need a lot of energy? Or could it be that you can totally decouple things and run it on essentially no energy, but we all get richer, and the environment is doing fine?

SPENCER: Yeah, I've heard some people argue that, without fossil fuels, society might have advanced far less quickly. Do you think that's true?

ANDERS: I don't know. The reason I'm thinking about this is that a colleague at the Institute for Futures Studies, Karim Jebari, kind of dragged me into working on a paper with him about this. There have been these arguments where people say, "Okay, look at the graph of GDP per capita and energy per capita. It was a perfectly straight line from the Second World War up until the oil crisis in the early '70s, and then it turns roughly horizontal because, yeah, we still kept on getting wealthier, but we didn't use more energy." And to me, that sounds great. Wow, we're going green here. But some people say, "Wait a minute. Total factor productivity — how productive we are — also started slacking off at the same time. We're not becoming as productive as we used to be. We're still getting richer, but not at the rate we used to."

SPENCER: Do you think that energy and productivity tend to be coupled around physical things, like if you're moving dirt, things like that? Whereas, if you're talking about productivity in information processing, like your ability to achieve goals as an office worker, maybe they're less related.

ANDERS: It feels like they should be. It's very obvious that we need energy to do work; that's even how we defined energy before we got into advanced physics in the 20th century. And it seems like moving ideas around on a spreadsheet shouldn't necessarily take much energy. Big ideas and small ideas on the computer probably still cost about the same number of watts of power. But the problem is, of course, that maybe there are more subtle effects, and that is something that seems very worth understanding.

SPENCER: So what's your current model of how energy is related to GDP growth and productivity more broadly?

ANDERS: My model is that we have a physical part of society, and that is still very important. It might not necessarily be the largest fraction of the economy but still, getting raw material, turning raw material into useful goods, distributing those goods, moving people around, recycling stuff, all of that takes energy. And it's also based very much on basic physics. We understand fairly well what's going on there. There are some very cool links here to thermodynamics. It turns out that recycling and mining, both of them are sorting operations where you want to sort out the atoms of a certain kind from some ore — and this goes whether it's iron ore or salt water that you're trying to desalinate — and then thermodynamics forces you to pay a certain energy for it. But the problem is, most of our economy consists of services. We're talking to each other. We're writing pieces of paper saying that somebody owns something else. We're investigating pieces of paper and trying to figure out, does that guy actually own this and how much tax should he be paying for it? These things, it's much less obvious that they need energy.
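
To make that thermodynamic sorting cost concrete, here is a rough back-of-the-envelope sketch (not from the conversation) of the minimum work needed to pull fresh water out of seawater, assuming ideal-solution behavior and illustrative concentrations:

```python
# Rough lower bound on the energy cost of "sorting" (here: desalination),
# estimated from the ideal entropy of mixing. Illustrative numbers only.
import math

R = 8.314   # gas constant, J/(mol*K)
T = 298.0   # temperature, K

# Seawater: ~35 g NaCl per kg of water -> ~0.6 mol NaCl -> ~1.2 mol of ions,
# against ~55.5 mol of water.
n_water = 55.5
n_ions = 1.2
x_ions = n_ions / (n_water + n_ions)   # mole fraction of dissolved ions

# Minimum work to extract one mole of pure water from the mixture
# (dilute-solution approximation: W ~ R*T*x_solute per mole of water).
w_per_mol = R * T * x_ions             # J per mole of water
w_per_m3 = w_per_mol * 55500           # ~55,500 mol of water per cubic metre

print(f"{w_per_mol:.1f} J/mol, ~{w_per_m3 / 3.6e6:.2f} kWh per m^3")
# -> roughly 0.8 kWh per cubic metre: the thermodynamic floor Anders is
#    pointing at. Real desalination plants spend several times this.
```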

SPENCER: So if you had to parcel things out now, if we had to keep our energy usage at a much lower level than it is now, do you think that would essentially destroy the modern economy? Or do you think that we would actually be fine because we could do a lot of the things we're doing with much lower energy usage?

ANDERS: I'm rather optimistic about this. I think you can totally do decoupling and run most of what our civilization is doing way more efficiently. It's relatively rare that we're close to these thermodynamic limits. It happens in a few special cases, but our computers are tremendously inefficient. We're very far from the Landauer limit that tells you how much energy you need to feed in, in order to erase bits of information. And much of our services could probably be done way more efficiently, both in terms of energy, in terms of transport, but also probably just in terms of organization. But that's the point where I'm getting a bit worried, because reforming services and getting them to work better seems to be super complicated. It should be obvious that, if you can improve something, a company would want to do it, and would make a lot of money from that. But we see a lot of inefficiencies, so maybe the market is not optimizing for that. And even worse, when something becomes very cheap energy-wise, we tend to not save on the energy, but actually use more of it. When we switched from incandescent light bulbs to LEDs, a lot of people started leaving the lights on because it didn't cost them that much.

SPENCER: Is there some fundamental reason why, when energy becomes cheaper, that we're not going to think about efficiency and saving so much? Is it basically just because there's diminishing marginal return to us in doing so?

ANDERS: I think there is something like that. It's called the Jevons paradox in economics: when something becomes cheaper, it's not that you spend less on it, but usually that you buy much more of it. And sometimes that works out beautifully. Our computers are kind of a virtuous version of this paradox. We got cheaper compute, so we figured out new, wonderful uses for that computing. And while, as an old programmer, I still feel a bit bad about programs being in the megabyte and even gigabyte range when I know that they could be more compact, still, it works really, really well. But that light bulb example means that now we might be producing a lot of wasted light, which is bad in terms of light pollution, aesthetics, and our ability to sleep. And I'm a bit worried that we get more waste. But I think there is something deeper. Could it be that we actually need a lot of energy to run the economy? There are some people arguing that the shift to green energy and the shift away from nuclear in the '70s might have been where you get the great divergence in productivity. We should have had our flying cars by now but, instead, we ended up with a lot of bullshit jobs.

SPENCER: Could you explain that a bit more? You're saying that, if we had used other forms of energy — like nuclear — we would have had a different trajectory of our society?

ANDERS: John Storrs Hall, in his book Where Is My Flying Car?, has argued that we shifted direction: we started optimizing things to save energy instead of trying to extract energy in better ways. Partially, that was because, back in the '70s, it didn't look like you could get renewable energy very cheaply. So that was an interesting project, but it was mostly for special applications. The alternatives were using fossil fuels better, or nuclear, which was getting into political trouble. The end result was that people focused on making things more energy-efficient, and that might have meant that things overall became less productive. Now, I don't know enough economics to tell whether this is true. This is something I want to poke into, so you might have to re-invite me in a while when I've figured it out. But it's not obvious to me why this should affect the rest of the economy. The efficiency of people working in marketing doesn't seem to be that strongly dependent on how many watts of power you send into the office. I have a hard time imagining that, with ten times as much energy going into a marketing office, it will produce ten times better ads. That seems weird to me, so I'm a bit skeptical about that. But there are many industrial processes where a lot of energy probably should make things much better. Conversely, I think going really green can be a very smart thing but, quite often, you have a trade-off. You can make things using less energy at lower temperature very efficiently but, quite often, that means that you have to give up speed. Biology demonstrates the wonders you can do when you have almost reversible nanomachines, except that it takes a fair bit of time for a tree to grow up and produce an apple. If I want an apple now, I have a problem. I can't just plant some seeds and wait for it.

SPENCER: Yeah, I've always been struck at how slow it is for wounds to heal. You might think that, if you got a big gash on your arm that, for survival purposes, it'd be very advantageous for that to heal quickly, so you don't get a lot of bacteria and so you can get back to full functioning soon. But it could sometimes take weeks to get reasonable amounts of healing. Do you think that could be linked to this energy trade-off where, in order to heal quickly, it would have had to use far more energy?

ANDERS: I think that's right. I've seen some papers where biomathematics people are trying to calculate how much energy you can put in. There is also probably a control problem. You don't want cells to divide too rapidly. In principle, our cells could perhaps divide almost as fast as bacteria, which would mean that you get a doubling every hour, or something like that. But that's probably just inviting trouble in the form of cancer or other things. You might want to be a little bit slower in building extra material, but these trade-offs are everywhere in our body. Our immune system and our brain are kind of doing a tug of war about who gets the most resources. And then, of course, the gonads say, "Wait a minute, we totally need to reproduce, and we're sitting here as the big controllers of actual inclusive fitness, so you guys better give us our energy budget." Meanwhile, the rest of the body is kind of grumbling about it.

SPENCER: Is there an actual competition among body parts, or is that just speaking in an analogy? Because you might think that they all share the same genes, so aren't they all incentivized through evolution to cooperate?

ANDERS: I think just because you're, on average, incentivized to cooperate doesn't mean that you necessarily do it in all situations. It's a bit like priority-setting. Any organization that needs to set priorities might have a mission that everybody is in on. But when you need to prioritize something over something else, there's going to be grumbling. Even if everybody realizes it's rational, it's still not very nice to try to do your work with smaller resources. In the body, it seems like the brain does have a bit of a privilege in snapping up blood sugar, while the immune system is regulated by various hormones — including stress hormones — and how much of a share it gets varies. Generally, it seems like, when you're under long-term stress, the immune system gets a smaller share — which might be rather bad for it — while short-term stress gives it a bigger share. So you have these trade-offs going on and, quite often, they're also behavioral and learned. It might be that you recognize that, "Okay, now I'm going to go hungry for a while," and you adapt to that. I'm not a super expert on this, but I think there is a lot of interesting stuff we can learn from it. And similarly, the embryo, as it's developing inside the mother, is in an interesting competitive situation. For its survival, it's great if it gets all the nutrients, but that's worse for the mother's survival. So there is a kind of negotiation in the placenta: how much energy and how many nutrients go to the baby? If too much goes to the baby, it might become an orphan. On the other hand, the biology of the mother — and the evolution that has set this up — is optimized for making sure that babies also get born. But there seem to be various interesting tensions here.

SPENCER: And in that case, there actually is evolutionary competition, because the mother and child only share about 50% of the genes that vary between people. And so you could actually have genes expressed in the baby that are not in the mother, and that actually are in competition with the mother's genes.

ANDERS: Yeah. This gets clearer when you have a slight difference in genomes, but you could make an argument that this is going on between the different parts of us, especially the reproductive system. Germ line cells, in some sense, might have a slightly different agenda from the other cells, which are going to get discarded along with the parent when, eventually, the offspring take over the world.

SPENCER: Going back to the question of energy, one argument that I've heard — I'm curious to get your opinion on — is that, because energy is not very portable — it's hard to move it around efficiently — it matters a great deal what the energy production of each country is (let's say) per capita, and that this actually might be a major driver of how well different civilizations do in terms of growth. Do you think that that's true today?

ANDERS: It might depend on what you're making. It used to be that we were in this Malthusian trap; energy was mostly in the form of nutrients that we got from plants and animals, and it got converted into muscle power that was used, of course, to grow more plants. As we got more energy, you got a larger population. So the whole situation was really a question of: how much area of land generating nutrients can keep a population going? And that determined how big the army was and how many brains could solve problems, so it was all fairly tightly connected. But today, energy is entering the system in a very different way — as fossil fuels or hydroelectric dams or nuclear power — and doesn't get converted into that many people. We could have used our abundant energy during the Industrial Revolution to vastly increase the population, but that only happened to a degree. There is, of course, this ongoing debate right now about falling fertility rates and whether that is going to lead to a demographic disaster. But it's also interesting to note that we just found it more useful to turn that energy into aluminum rather than people. The question is: is your economy built on energy-intensive materials like aluminum, or on creating food, or is it finicky stuff like software and microchips? They certainly require some energy, but I bet TSMC's facilities in Taiwan aren't drawing as much power as an ordinary steelworks would.

SPENCER: So it could be that, if you fix the technology... Say, for certain technologies it was true that you needed to have very high energy production per person (or something like this), in order to advance, in order to grow. But then, as you get into other technological eras where maybe the thing you're producing is less energy-intensive, then maybe that link breaks down.

ANDERS: Yeah, that is my suspicion. However, I'm not certain about this at all. I'm curious to find out actually what's going on here, because this is also a link between the world of physics, the energy part, and the world of civilization, the economics part. And I want to find out more of these links. Earlier, I mentioned the Landauer limit. That's my favorite one because it links computation — or at least erasing bits using irreversible computations — with energy use. And that allows me to make all sorts of very interesting claims about the long-run future, because I can say something about the possible energy flows over there.

SPENCER: Could you explain that limit?

ANDERS: If you have a register in a computer and it contains either a zero or a one, and now you want to erase it so it becomes a one, that's obviously a practical electronic or mechanical operation. But at this point, thermodynamics shows up and says, "Wait a minute, you have one unit of uncertainty there, one bit of entropy. And according to my second law here, entropy will always stay the same or increase. You better do something here to keep that law working." So what happens is, I need to expend some work, and I get some waste heat. You could say that Maxwell's demon shows up and holds out his hand for a little tip. And what is happening is, of course, that I'm moving that uncertainty somewhere else. If I just happened to have a big computer with an empty computer memory, I could just swap that bit for a bit that I know is zero from that register. That wouldn't cost me any energy if I did it in a clever way. But in practice, computer memory does fill up; eventually, I run out of it. I would need to use some energy, and then I end up with a bit wasted. And that's where that lost bit goes; it goes out into a big heat puff in the universe. The noise in the background of reality — that's all erased bits. They're still here, but they're impossible to unscramble from each other.
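
For scale, the Landauer bound works out to k_B · T · ln 2 joules per erased bit. A minimal sketch of the arithmetic, assuming room temperature:

```python
# Landauer bound: minimum energy to erase one bit is k_B * T * ln(2).
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

e_bit = k_B * T * math.log(2)   # energy per erased bit, J
e_gigabyte = e_bit * 8e9        # erasing one gigabyte at the physical limit

print(f"{e_bit:.2e} J per bit, {e_gigabyte:.2e} J per GB")
# -> ~2.9e-21 J per bit. Real hardware spends many orders of magnitude more
#    per bit operation, which is why Anders says we are far from the limit.
```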

SPENCER: So the idea is that if, in order to flip a bit from one to zero, zero to one — in order to still follow the law that says that, on average, entropy has to increase — you're essentially having to create entropy somewhere else to make up for that lost entropy, and that's essentially ended up as waste heat in the universe?

ANDERS: Yeah, except that flipping a bit, you can do that for free because you don't need to know what the bit is. You can just change it to the opposite. And that works because it's a reversible operation. If you do it again, you get back to the original state. So it's only the irreversible operations that matter. The interesting part here is that there are these cool theorems by Bennett and others showing that, if I have a program that is doing a lot of irreversible computations, I can, in principle, turn it into a much larger reversible program and run my computations without, at least technically, having to pay this entropy cost. I read out the result and then run it backwards and end up with the original state, and I haven't increased entropy, and I don't have to pay any energy cost.

SPENCER: What's an example of an irreversible operation? A reversible one would be like flipping a bit, because you could always flip it back. But what's irreversible?

ANDERS: One easy example is an AND gate. So an AND gate takes two inputs and, if both of them are one, then it sends out a one. But if both are zero, or if one of them is zero and the other, a one, then it returns zero. This formalizes the logical idea of A and B. For that statement to be true, both A and B have to be true. Now when I run this operation, I lose information. If I get a zero out of an AND gate, I don't know whether this was because both inputs were zero or there was a one on one of the inputs. So I lost some information here. I can't go from that output back to the input state. Flipping a bit, that is reversible. And the cool part is, you can build the truth tables and look at the different possible gates. So you can find sets of gates that are universal and it turns out that you can actually construct reversible logic that does the same thing as normal logic, at the price of being a bit more complex and having a lot of extra bits hanging around.
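
A quick way to see the information loss is to tabulate the gate; this little sketch (not from the conversation) just prints the truth table:

```python
# The AND gate's truth table: four distinct inputs map to only two outputs,
# so the input cannot be recovered from the output alone.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", a & b)
# 0 0 -> 0
# 0 1 -> 0
# 1 0 -> 0
# 1 1 -> 1
# Three different inputs collapse onto the output 0. That many-to-one
# merging of states is exactly the irreversibility that carries a
# thermodynamic price under the Landauer bound.
```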

SPENCER: I've heard of this idea. The people actually working on this are creating reversible computers where, for every single operation that occurs, they track extra information to make it reversible. So if they were doing the AND gate — you've got (let's say) a one and a zero coming in, and then you get a zero coming out — they have to keep track of some extra information so that you would know it was a one and a zero coming in, not a zero and a one. And if you do that, if you keep tracking extra information — okay, there's an extra cost if you've got to store that extra information — then every operation can be inverted automatically.

ANDERS: Exactly. So, for example, Fredkin proposed one gate — the controlled swap — that takes three inputs and, depending on the first input, switches places of the other two inputs. That one is reversible; you get three outputs out and you can run it backwards. And in principle, this doesn't have to cost you energy. The problem here is, of course, the 'in principle' part. When you build something out of matter, everything is imperfect, and it turns out that, in order to make these gates run really well, they have to be rather cold, or you need to run them rather slowly. And that's, of course, a problem. We want to have our results from our computer quickly, so we have a time limit that might actually force us to use some irreversible stuff in order to get an answer in a reasonable amount of time.
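
A minimal sketch of the controlled-swap gate described here, checking its reversibility by brute force over all eight input states:

```python
# Controlled-swap (Fredkin) gate: if the control bit c is 1, swap a and b.
# Applying it twice returns the original state, so it is its own inverse.
from itertools import product

def fredkin(c, a, b):
    return (c, b, a) if c == 1 else (c, a, b)

for state in product((0, 1), repeat=3):
    assert fredkin(*fredkin(*state)) == state   # reversible: nothing is lost

outputs = {fredkin(*s) for s in product((0, 1), repeat=3)}
print("distinct outputs:", len(outputs))   # -> 8: a permutation of the states
# Because every input maps to a unique output, no information is destroyed,
# and in principle no entropy needs to be generated by the computation.
```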

SPENCER: Do you think we'll actually see reversible computers anytime soon, saving energy relative to the normal type of computer?

ANDERS: I don't know whether we'll get to reversible computing soon, but people are certainly forced to look for it because we're starting to notice that our compute is becoming a bigger and bigger part of our energy budget. Sometimes, it's somewhat overblown. Everybody's going on about how AI training is taking a ridiculous amount of energy which, on one hand, sounds like large numbers, but it turns out that (I think) training GPT-3 corresponds to a 30-foot steel bridge — which is not a small amount of energy, to make the metal and assemble it — but still, we don't complain that railway bridges are wrecking the climate completely. But it's pretty clear that we don't want our phones to lose battery power because we use too much energy. When the laptop is on our lap, it shouldn't be too hot, and we certainly don't want the cost of a data center to be mostly powering and cooling it. So people are pushing this rather strongly.

SPENCER: I wonder whether a reversible computer would have another advantage, which is that you can essentially unwind operations. So an undo operation is built into the machine. Is that something that's powerful, or is that just a cool party trick?

ANDERS: I think it's a little bit of both. There are probably some security-minded people who say, "Oh, if I can just be certain that no record remains of my computation, that might be great for security." I think it's mostly useful because you don't increase entropy where you don't want it. But again, you have this problem from reality. Sometimes, errors happen. Cosmic rays hit your computer memory and flip one bit. And sometimes a thermal fluctuation makes something change. And then you need to do error correction. And this is, of course, a really annoying, irreversible operation, because now you have to throw away the error. That bit of error needs to go somewhere. And Maxwell's demon shows up with his hand outstretched and wants a little bit of energy. So error correction is probably going to be, in the long run, what costs the most energy, even if everything else is perfect and reversible.

[promo]

SPENCER: One overarching theme I see in a lot of your work is that you're thinking about the limit of things. You're thinking about how far we can push something in theory and then saying, "Well, what does that mean for the future of civilization," not just in five years, but in 5 million years, or for as long as human civilization lasts? And I wonder about that kind of thinking, of how far can we take that to get actual true generalizations about the future versus how much does it end up being just pure speculation?

ANDERS: The reason I like these limits is that it's not necessarily pure speculation. It's very easy to make up stuff about the future, and it's fun. You can build entire careers on being a good hand-waving futurist. But if you want to say something that is true or at least decision-relevant, you want some rigor. And then figuring out where limits are, to me, that feels like here is a fairly solid thing I can lean on. Now, the problem — and people love pointing it out — is, of course, we have been wrong about what the laws of nature are in the past. We have been wrong about where limits are. So I should expect to get some nasty surprises over time, and that's fair. But it still gives us a good starting point, because we know a fair bit about practical limits that look very unlikely to go away anytime soon.

SPENCER: Would it be fair to say that, when you're speculating on the long-term future, what you're using these constraints for is as guideposts, saying, "Well, as long as the whole laws of physics aren't rewritten, we know that there are going to be certain limits — like the Landauer limit — and that means that, no matter how good humans or aliens get at doing stuff, they're going to eventually bump up against that," and that allows us to make a claim about something that is incredibly distant?

ANDERS: Exactly. And some of them — like the Landauer limit — are amazing, because they even link something like information processing to energy. And I can start saying, "This volume of spacetime contains this much energy, so you can only erase this many bits across its history," which is incredibly powerful. Now, it also focuses a lot of attention on that limit: a lot of things hinge on the Landauer limit actually being a real limit. There is a fair number of people who doubt it. I think they're wrong, but that discourse is also very useful because we're actually learning quite important things about the physics of information. So there are many people who are trying to demonstrate that they can do things at a lower cost than the limit implies. And typically, it's a magic trick. You have actually paid part of the cost somewhere else in the lab, but it doesn't show up in the experiment. So it looks very much like you're erasing a bit without paying the price. But actually, if you look under the pillow, that's where the price was hidden.

SPENCER: What are some of these other limits that you use when you're thinking about the distant future?

ANDERS: One of the most obvious ones is the light speed limit. And it's an interesting one because it's incredibly fundamental once you understand the setting of relativity theory. It's kind of the warp and weft of how causality in spacetime seems to work. And again, there are, fortunately, a lot of people poking at that and not liking it in the least, and trying to figure out ways of moving stuff or information faster than light — and if that were possible, you can even say some things about the strangeness that then emerges. You also have, of course, the second law of thermodynamics. It's a very powerful principle that disorder tends to increase in a fairly precise manner. We can say a lot of things about that, including that it's statistical, so it's not necessarily always true on a local scale. You can cleverly set up correlations in a molecule to make heat flow from a colder atom to a hotter atom which, normally, on the macroscopic scale, would never be allowed. But you can get away with it if you set things up right. Again, it's a little bit of a magic trick. You set up things so carefully that reality seems to go in the opposite direction. Limits like that are very useful. And then you, of course, have invariances. Energy conservation is an astonishingly beautiful and powerful way of reasoning about everything. And now we understand that the reason we have energy conservation is Noether's theorem, which says that, since the laws of physics seem to be invariant under time translation — they are the same from moment to moment — you get something that must be conserved, and we call this thing energy. Similarly, because the laws are invariant under spatial translation, you get momentum conservation, and rotational invariance gives you angular momentum conservation. This is incredibly beautiful and powerful. And if reality actually works as the theorem implies, we know something profound. The only big problem is that, in cosmology, it turns out that general relativity doesn't conserve energy, so it's a flop on the largest possible scale up there.
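
The time-translation case of Noether's theorem can be stated in a few lines; a standard textbook sketch for a single coordinate q with a Lagrangian L(q, q̇) that has no explicit time dependence:

```latex
% Conserved quantity from time-translation invariance:
H \;=\; \dot{q}\,\frac{\partial L}{\partial \dot{q}} \;-\; L
\qquad\Longrightarrow\qquad
\frac{dH}{dt}
  \;=\; \dot{q}\left(\frac{d}{dt}\frac{\partial L}{\partial \dot{q}}
        - \frac{\partial L}{\partial q}\right) \;=\; 0,
% which vanishes by the Euler-Lagrange equation; H is what we call the energy.
```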

SPENCER: These three different limits, rules (whatever you want to call them) — the speed of light limit, the idea that entropy increases on average, and energy conservation — I think they're each worth talking about briefly in turn. Let's go back to the speed of light limit. What is the intuition for why things can't go faster than the speed of light?

ANDERS: Einstein started with a bunch of famous thought experiments. He began by thinking, "What happens if I chase a wave of light and move along it at the speed of light?" He realized that, if you could do that, Maxwell's equations that describe electromagnetic waves wouldn't work: the wave needs to always be moving relative to the observer. That was a seed for relativity theory. Relativity theory basically starts from the postulate that every observer sees the same speed of light — and it doesn't have to be physical light; it's just a particular speed. Real light happens to move at that speed because of reasons. If you assume that, then the whole theory comes out, and it has a lot of very powerful symmetries. Basically, it says there needs to be an invariant speed: time and space get squeezed when observers move relative to each other, but something has to stay invariant, and that leads to this fixation with the speed of light. I'm not certain I explained it particularly well. It's a tricky one because, on one hand, it is enormously simple mathematically, but it's also rather far away from our everyday reality.

SPENCER: If it were possible to go faster than the speed of light, what do the equations tell us about that? I've heard some people claim that that would imply being able to try to move backward in time. Is that actually the implication of going faster than the speed of light?

ANDERS: That's one trick you can do. Relativity theory normally describes how an experiment would look from the frame of reference of somebody moving past at some velocity. And generally, what you see is that, well, it gets squeezed together a bit. They perceive it as being smaller along the direction of motion, and it moves a bit in time. Clocks slow down and speed up. Now, if you had a way of moving faster than light, you can set up a little loop. So I send a message to you over in another star system, faster than light. Then we have an accomplice who is moving past you at a fairly high speed, and he will say, "Oh, according to my coordinate system, Anders sent this message in the future, and we received it in the past." So he takes that message, puts it into his faster-than-light transmitter, and sends it back to me on Earth. And now it seems like it should arrive back on Earth before I sent it. You can play around with this in a lot of ways and, generally, you just get these causality violations. And the typical response is, of course, the physics professor says, "And that is why you can't go faster than light: because it produces a very weird, crazy result." Then again, maybe the universe happens to be weird and crazy. If this were possible, it looks like we could send some information from the future to the past, which would be tremendously useful for computing and a lot of applications.
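
The loop described here can be made quantitative with an ordinary Lorentz transformation. A toy calculation (units with c = 1; all numbers are made up for illustration):

```python
# Toy check of the "message received before it was sent" effect.
import math

def boost(t, x, v):
    """Coordinates of the event (t, x) as seen from a frame moving at v (c=1)."""
    g = 1.0 / math.sqrt(1.0 - v * v)
    return g * (t - v * x), g * (x - v * t)

# Event A: message sent from Earth at the origin (t=0, x=0). Event B: received
# 10 light-years away after 1 year, i.e. carried at 10x light speed.
tB, xB = 1.0, 10.0

t_accomplice, _ = boost(tB, xB, v=0.5)   # accomplice passing at half light speed
print(f"reception time in the moving frame: {t_accomplice:.2f}")
# -> about -4.62: in that frame the reception happens *before* the emission
#    at t = 0, so a second faster-than-light transmitter there can reply
#    into the sender's past. That is the causality violation.
```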

SPENCER: I appreciate that you're willing to say, "Hey, well, maybe we shouldn't just immediately dismiss that crazy result." Physics has had all kinds of crazy results that turned out to be true, [laughs] so we don't want to jump the gun even if we think that it's probably not the case. What do you think of the approach to explain the speed of light limit as saying that, if you think about how much energy it takes to speed something up, as you make it go faster and faster, it takes more and more energy, and then the speed of light limit is basically just saying, "Well, because it keeps costing more and more to speed something up, eventually you hit a maximum limit, no matter how much energy you throw into it."

ANDERS: I think that is a great way of explaining it, which is also somewhat misleading. It's almost like one of those nice explanations you give to little children to shut them up when they're asking why, which is not entirely untrue, but not entirely true either. The problem is, this is a good explanation of why I can't get my spacecraft to go faster than light, no matter how much motor power I put into it. But the photons in light, they're already born at light speed. If I could just nudge them a bit, shouldn't they go faster than light? And maybe I could get some subatomic particle that already starts out faster than light speed. That's, of course, the point where the equations go really bananas because, suddenly, I get negative energies or imaginary masses. And again, typically, the physics professor smugly says, "Yeah, look at all that craziness. That's, of course, why you can't have those tachyons and things." But again, there's no guarantee; maybe that craziness is well-behaved enough that you could make a universe that works according to it. It seems tricky to actually get it consistent, especially because many of the other theories in physics start breaking down rather badly once you get these negative-energy particles. Because, of course, you can gain energy by making them move even faster. So if you get tachyons, you probably end up with the universe exploding, because you extract energy by accelerating tachyons which, again, sounds like not our universe.
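
Concretely, the "going bananas" is visible in the standard relativistic energy formula; a one-line sketch:

```latex
% Energy of a particle with rest mass m at speed v:
E \;=\; \frac{m c^{2}}{\sqrt{1 - v^{2}/c^{2}}}
% For v > c the square root becomes imaginary, so a real energy E would
% require an imaginary rest mass m (the tachyon case), and E then *falls*
% as v grows -- accelerating a tachyon releases energy rather than costing it.
```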

SPENCER: So what probability would you assign to the speed of light limit being a true limit?

ANDERS: I would give it about 99%. I think we have reasons to doubt that it's a perfect limit because of quantum mechanics. The positions of particles are a bit indeterminate, so those light cones describing where particles can be in the future can't be perfectly rigid, it seems. And of course, general relativity makes things slightly more complicated because, while spacetime locally obeys the light speed limit — inside curved spacetime, you can still only move slower than light locally — you can have wormholes and other solutions that allow you to get from point A to point B much faster than if you traveled through flat spacetime. In some sense, that's cheating. But again, adding wormholes to the situation is — if you excuse me — opening another interesting can of worms because now, we again have a problem: how do I set up the wormhole so I don't get time communication? Or maybe that is an allowed form of time communication.

SPENCER: Right, because if you think about speed as just being the amount of distance traveled relative to the amount of time spent, if you had a wormhole that connected two points in space, you could effectively go faster than the speed of light, even though you never actually get your velocity up faster than the speed of light.

ANDERS: Exactly, and that earlier problem I had about setting up that little time loop between you and me in different star systems, using that extra spacecraft — that works perfectly fine with wormholes, too. So wormholes can relatively easily be turned into time machines. And again, physicists have been trying to avoid this rather uncomfortable conclusion. There is the chronology protection conjecture of Hawking, saying that, "Yeah, that can't happen; something will intervene." There have been some rather cool attempts at showing that quantum field theories predict that virtual particles looping around these time loops would give infinite energy densities, which probably destroy the wormholes before they can actually form a time machine — kind of the universe conspiring to prevent them from being built — which sounds very nice up until you realize that much of this is conjecture. We can't actually prove it rigorously enough. There are, of course, many physicists who say that you can't actually build those wormholes without exotic matter, which we don't know exists and which has a lot of properties we usually assume matter can never have. So maybe the simple solution is that the universe is actually much more boring. It's just normal, flattish spacetime, a bit of curvature, and then maybe a few black holes sprinkled around, none of that really exotic stuff.

SPENCER: So the idea of wormholes at the tiniest scale, is it known that they occur, or is it merely speculative that they might occur at those tiny scales?

ANDERS: That's a good question. The general idea is that, when you go to smaller and smaller size scales, quantum fluctuations should matter more and more and, eventually, once you're down to the Planck scale, you can't describe spacetime anymore without having a quantum theory of spacetime — which we don't have — at which point the hand-waving intensifies quite tremendously. So one picture people often paint is that, oh yes, down on that scale, spacetime is not like that proverbial rubber sheet — which is, by the way, a really bad analogy that has confused people in popular science endlessly — but turns into this bubbling foam with a lot of little mini black holes and wormholes and loops and bubbles that are constantly changing and churning. Except that we don't know that either. We just know that, below that scale, we need to use some form of quantum gravity theory to describe what's going on. And it is tricky because, if you want to do a measurement at that kind of scale, you actually need to use so much energy that, if you squeeze it into that small corner of spacetime, the theory predicts roughly that, yeah, it should turn into a black hole. Except that this is a quantum black hole, so heaven knows how long that's going to last and what actually happens. Most theories here are very hand-wave-y, which is kind of annoying. There are also people who say, "Oh, this is, of course, where it all turns into pixels." So instead of that rubber sheet that is totally continuous, they start to imagine that, yeah, this is the pixel size of reality. This is where it all turns into little triangles or squares or some more complicated mathematical structure, and everything is discrete down there — which is very nice and neat these days when we're all online and connected and think in terms of information and software. But we have no shred of evidence for that either.

SPENCER: Is it well known though, more definitively, that particles are popping in and out of existence, like particle pairs at the small scales? Because I thought that we had observed that at least.

ANDERS: It's kind of a funny situation because you definitely get pair production. Take a photon with enough energy, have it pass nearby something like an atom and, quite often, you can get it to produce a positron and an electron; particles popping into reality. At this point, you say, "Yeah, Anders, but you already had a pretty powerful gamma photon you sent past it; it was not particles popping in from nowhere." And the picture of the vacuum as full of particles coming and going is an interesting one because you can kind of see evidence for it; the Casimir effect is a beautiful example. If you take two conductive, mirrored plates in vacuum and put them very close to each other, you will feel a force trying to push them together. And one way of expressing that is saying, "Well, the vacuum is full of these fluctuations, so it has a zero-point energy. But between the pair of plates — where not every particle pair combination and every wavelength is possible, because the walls are there — the energy is going to be slightly lower than in the surroundings, so you get this pressure pushing them together." Now, the tricky part here is, we tend to assume that we're talking about reality here but, actually, this is all about models. Quantum field theory has this problem of describing what's going on. We have very nice models that produce the right results, but they don't necessarily tell us that this is what is actually over there. Richard Feynman opened up a lot of doors by doing his famous diagrams. In all physics textbooks and popular science, you see these diagrams with lines corresponding to particles, and then they hit each other, and you get interactions. And you sometimes get these wonderfully mysterious diagrams where you see an electron and a positron popping out from nowhere and then annihilating each other; that diagram is just hanging there. What these diagrams actually are is mathematical tricks for calculating the field intensities. You basically sum over all possible diagrams that fit with what you're trying to calculate, and each of the diagrams gives you a term in an equation. This is a lot of really weird algebra and, in the end, you sum it. Today, of course, you solve it using a computer algebra system. So the diagrams are not intended to tell you what is actually going on. It's not that those virtual particles are actually real; rather, they're good mathematical contrivances. Reality behaves as if there were those virtual particles, but they don't actually have to exist to do the job.
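
For the parallel-plate case mentioned here, the idealized Casimir pressure has a closed form; a numeric sketch (perfect conductors and zero temperature assumed):

```python
# Casimir pressure between two ideal parallel conducting plates separated
# by a distance a:  P = -(pi^2 * hbar * c) / (240 * a^4), attractive.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(a):
    return -(math.pi ** 2) * hbar * c / (240 * a ** 4)

for a in (1e-6, 1e-7):   # 1 micron and 100 nm separations
    print(f"a = {a:.0e} m: P = {casimir_pressure(a):.3g} Pa")
# -> about -1.3e-3 Pa at 1 micron and -13 Pa at 100 nm: negligible at
#    everyday scales, but real and measurable in micro-machined devices.
```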

SPENCER: Well, there are two different ways you could talk about that. You could say they're not really there; something else is going on. But whatever it is that's going on, behaves identically to that. Or you could say, well, that's just an approximation, but it's a good enough approximation that we can't tell the difference. So this approximate reality that we're modeling is not really what's there. And if you really can measure things accurately enough, we'd realize it isn't even correct. Which of those are you saying?

ANDERS: I think both of them are equally valid. There is this joking term in quantum mechanics, "Shut up and calculate." When people were arguing about the foundations of quantum mechanics back in the '30s and '40s and '50s, and there were all these different interpretations, many people realized that the interpretations are great for having intellectual conversations and maybe late-night debates over a glass of wine. But they don't actually tell you anything new about the numbers you want to get out. You want to calculate: if I send a neutron towards this atomic nucleus, what's the probability that it's going to be absorbed? That's something I can measure, and I can also try to calculate it, and I can even check that, "Oh, I'm getting the right result." Later on, quantum field theory did the same thing. So we can calculate: if I send a photon into an electric field, what's the likelihood of it turning into a shower of particles? Great. I can get the numbers. And over time, people got very good at getting those numbers. But that doesn't tell us at all what's really going on. So maybe this is all one big computer simulation, or maybe there are actual virtual particles, or maybe it's something even weirder. And the shut-up-and-calculate school, I like it, up to a point. I think it's making an important realization that, in the end, physics must always be testable and actually produce something we can measure and notice.

SPENCER: Right, so if you can't tell the difference between x being true and things just behaving like x, well, then you don't really know if x is true yet.

ANDERS: Yep. But this, of course, gives us a kind of metaphysical itch. Normally, we don't like accepting that something could be true or not true but we can never find out. In normal life, that is relatively rare; although, quite often, when you think about social life and people's intentions, we actually do accept a lot of uncertainty. Here, it becomes so obvious, and many of the metaphysical assumptions that show up in the quantum realm are so outrageous, that many people say, "No, I want to know whether the many-worlds interpretation with its multiverse, or the Copenhagen interpretation — with its really weird collapse of the wave function — is true, because the two of them are suggesting radically different universes." And then there are these annoying theorems showing that, yeah, they all make identical predictions. So maybe we should say they are just two ways of looking at the same thing.

SPENCER: It's really interesting how, in the history of physics, you could have theories that were very, very accurate at predicting things but philosophically were completely wrong. For example, Newtonian mechanics — which proposes what essentially amounts to a clockwork universe where everything follows deterministically — is a very accurate theory in many cases. And yet, as far as we can tell, its philosophy of the way the world works is completely wrong. So there's a disconnect, where even a really accurate theory might completely mislead you about the underlying nature of reality.

ANDERS: Yeah. And sometimes inaccurate theories can also work really well. It's quite common, when we smugly talk about how enlightened we are in the modern era with modern physics, to talk about those poor medieval people trying to predict planetary orbits by adding epicycles. The idea was, of course, that the planets were moving in perfect circles, but then there were deviations. So they were actually on a little circle attached to the big circle. And gradually, people added more and more epicycles to fit the ever-better observations. The funny part here is, of course, that this is actually a form of Fourier analysis. It is very similar to how we process data today in many cases: we take a big sine curve and then add smaller wiggles to it to fit the data better. And indeed, quite a lot of the medieval people realized that this was probably not real — that it was a mathematical contrivance to keep track of where the planets are for our horoscopes, not proof that there are crystal spheres attached to other crystal spheres. It was only a few people who really went for the idea that there must be lots of angels sitting around messing with the crystal spheres.

SPENCER: Rather than being dumb, the idea of epicycles is actually brilliant. It's essentially, as you point out, reinventing Fourier analysis as a general purpose modeling technique that's very sophisticated. It's just that it wasn't reflecting the underlying reality. It could have modeled all kinds of underlying realities.

ANDERS: Yeah, it actually had the problem that you could have fitted any kind of data with enough epicycles. That's actually the beauty of modern Fourier analysis: take any kind of signal and you can decompose it into a lot of components and approximate it perfectly well. Of course, the best theories are the ones that have some pretty simple assumptions, from which non-trivial conclusions follow fairly straightforwardly, and you don't get to vary them too much. David Deutsch, of course, has talked quite a lot about this: good theories are good explanations. You don't get to change things too much.
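
The epicycle–Fourier connection is easy to demonstrate: the discrete Fourier transform of a sampled closed curve literally is a stack of circles, each coefficient giving one epicycle's radius and rotation rate. A sketch with an arbitrary made-up "orbit":

```python
# Epicycles as Fourier analysis: decompose a closed curve (complex samples)
# into rotating circles and rebuild it from only the largest few.
import numpy as np

N = 256
t = np.linspace(0, 2 * np.pi, N, endpoint=False)
z = np.sign(np.cos(t)) + 1j * np.sign(np.sin(t))   # a crude square "orbit"

coeffs = np.fft.fft(z) / N                 # one epicycle per coefficient:
freqs = np.fft.fftfreq(N, d=1.0 / N)       # |c_k| = radius, freqs[k] = speed

def rebuild(n_circles):
    top = np.argsort(np.abs(coeffs))[::-1][:n_circles]
    return sum(coeffs[k] * np.exp(1j * freqs[k] * t) for k in top)

for n in (5, 25, 125):
    err = np.abs(z - rebuild(n)).max()
    print(f"{n} epicycles: max error {err:.3f}")
# The error shrinks as circles are added -- epicycles can fit *any* closed
# curve, which is exactly why they explain nothing by themselves.
```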

SPENCER: Some people have raised this concern with string theory, that string theory may be able to fit any universe you throw at it. And if that's true, then does it really tell us anything about our universe in particular? Or whatever the universe is, you can find some version of it that's going to match. What do you think about that?

ANDERS: I think that sounds like a valid criticism. Now, I never really got the math behind string theory. It seems to have generated endless amounts of absolutely gorgeous mathematics, but very few measurable results in terms of actual physics. It's also a little bit like Stephen Wolfram's attempt at making new physics. His Multiway Systems, insofar as I understand them, are an absolutely gorgeous idea but, again, so general that they can probably fit any universe. It doesn't necessarily generate our universe with our laws of physics. And at that point, you can either say it's probably not a good theory, or you might say maybe we have a multiverse, and we just happen to be in this particular corner of it that fits with our laws of physics. But that usually leaves a lot of people unhappy because, if the only reason the world is like it is, is that we can exist in it, then we don't get much extra physical knowledge from it.

SPENCER: This reminds me of some attempts to make Occam's Razor technically precise. The basic idea is that simpler explanations might be preferred over more complex explanations. And if you have these super powerful tools — whether it's Fourier analysis, as in epicycles, or perhaps, as some people argue, theories like string theory, or perhaps Wolfram's theory — they're such powerful tools that, in fact, they're essentially infinitely complex explanations. And so, in fact, you can't really get evidence in favor of them being true. Whereas, if you started with an incredibly simple explanation for how the universe worked — that just had two parameters or something — and that happened to match the way the universe works, you would suddenly actually be able to conclude quite confidently that that's a reliable theory.

ANDERS: Yeah. And I think that is why people are so confident that something like relativity theory and quantum mechanics have to be part of our future explanations of the universe. Special relativity has essentially only two axioms: the laws of physics are the same for all inertial observers, and they all agree on what the speed of light is. That's all it takes to get all the math of special relativity to unfold, which is absurdly beautiful. General relativity has a way messier mathematical background. Seen from one perspective, it's amazingly simple and beautiful — it's basically the simplest second-order theory that fits a few basic requirements — but, in practice, it's very tough to use. Quantum mechanics can be expressed in four or five axioms. And again, if you change them, you get something utterly different from what we see.

SPENCER: Right. Whereas — I don't understand much about string theory but, as I understand it — with string theory, there are a lot of choices. There's a choice of how many dimensions the world is in but, even more importantly perhaps, there's a choice of the topology of those dimensions, and there's some absolutely ridiculously huge set of choices to pick from that could represent all different universes.

ANDERS: Yeah, when I was doing my PhD and playing around with neural networks back in the '90s, I quickly realized that I can set a lot of parameters in my programs — I can make very complicated models — but they never felt particularly good. I was always feeling good at the end of the day if I made a model with fewer parameters than the results I got out of it when I ran various simulations. And that is, of course, what a good theory is. You don't put in too many assumptions, but you get non-trivial things you can test. And ideally, these assumptions should be simple and even hard to vary. If you can always fiddle with the parameters a bit to fit any data, it's not a great theory.

SPENCER: With that in mind, let's talk about the second really amazing theory that I wanted to touch on, which is the second law of thermodynamics, the idea that, on average, entropy tends to increase. It's a funny theory because it's so powerful and it's used for so many things. But it's also kind of trivial in a certain sense. There's a way of thinking about it where it's just sort of obvious. Do you agree with that, my characterization of it?

ANDERS: Yeah, I think it is. In some sense, you could say the world is moving towards more probable states. That's all there is to it. It's trivial.

SPENCER: Exactly. So tell me if this is correct. As I understand it, the basic concept is that, if you imagine a situation where every state is equally likely — you've got lots of different possible states that things could fall into, and they're all equally likely — then you're more likely to end up in a collection that contains many of those states than in a collection that contains only a few. And that's sort of really obvious. And if we think about applying this (let's say) to a coffee cup that has coffee in it, and you pour in some milk and stir it up: well, there are many, many states where the coffee and milk are mixed up, but relatively few where the milk is not dispersed evenly, where it forms some very particular pattern. And so, once you've mixed it up, you expect all the states to be roughly equally likely, and you're almost certainly going to find it in one of the states where the coffee and milk are very mixed up, rather than one where (let's say) the milk is in a perfect flower shape, or something like that.

ANDERS: Yeah, that's a good way of expressing it. Quite often, when people try to explain it in books and videos, they draw diagrams with boxes and squares and objects moving between them. Those are a bit misleading, even if they get the general gist across, because in normal life we are surrounded by so few objects. What happens in a cup of coffee is, of course, that you have something on the order of ten to the power of 24 molecules moving around. It's a ridiculously large number; it's impossible to imagine normally. That means that the statistical power becomes so much bigger. If we imagine a game board where you're moving around a few chess pieces randomly, then the probability of ending up in a special configuration is much larger. It's much more likely in such a small system to get these fluctuations, as people say, that restore order. Although, again, if you have a chessboard and just move things randomly, it's very likely that you will actually never see a return to the original state even in that small case, because 64 positions — that's already a pretty big state space.
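
Spencer's coffee-cup picture can be checked with a toy count: track which half of the cup each "milk molecule" sits in, and compare the unmixed macrostate to the mixed ones. A sketch with made-up sizes:

```python
# Toy model of mixing: n molecules each sit in the left or right half of the
# cup with equal probability. "Unmixed" = all of them on one side.
import math

for n in (10, 100, 1000):
    p_unmixed = 2 * 0.5 ** n                    # all-left or all-right
    p_balanced = math.comb(n, n // 2) / 2 ** n  # exactly 50/50 split
    print(f"n={n}: P(unmixed) = {p_unmixed:.3e}, "
          f"P(exact 50/50) = {p_balanced:.3e}")
# Already at n=1000 the unmixed state has probability ~2e-301; with the
# ~10^24 molecules in a real cup, "never" is an excellent approximation.
```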

SPENCER: Yeah, it's really wild to think that people are still playing chess games that have literally never been seen before in the entire history of the world.

ANDERS: Yeah, it's a beautiful thought actually. When I'm out walking, I sometimes think, "Oh, I've never been to this particular spot on Earth in my life." There are many spots, of course, that I have been to a lot of times but, even if I just walk slightly differently across a field, I'm very likely to step on a spot I've never visited. But that is so different from these utterly new chess positions, and chess is still a very simple game. Go has a much larger state space, and reality itself, of course — the configurations even of trivial things like clothing lying in a pile or the ordering of books — generates these enormously vast state spaces. The problem is, of course, figuring out what we mean by ordering. When do I notice that, oh, my clothing is actually organized in a pile that has an interesting pattern? Is that suddenly a low-entropy state, or is it just my imagination making me interpret it that way? If my books happen to be found in alphabetical order on my bookshelf, is that an amazing coincidence, or is it a sign that, actually, maybe I ordered them?

SPENCER: I heard that (I think it was) Richard Feynman would sometimes go into a lecture and say, "Oh, the strangest coincidence happened to me just a moment ago. I walked by a car that had license plate A1B694C," and people would look around confused. The point is that every specific license plate is equally unlikely, assuming plates are randomly assigned (obviously, in real life, people can choose them). Whether it's one that seems to have a pattern, like all A's, or one that seems completely random to us, each is equally improbable. And so there's something funny about the fact that, as humans, we favor some states very strongly and say, "Oh, that state's not random, whereas this other state is," even when they have equal probabilities under purely random generation.
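
In numbers, for a hypothetical randomly assigned format of three letters followed by three digits (the format is an assumption for illustration; Feynman's plate above doesn't fit it), every specific plate is equally improbable:

```latex
P(\text{AAA 111}) = P(\text{QXZ 583}) = \frac{1}{26^{3}\cdot 10^{3}} \approx 5.7\times 10^{-8}
```

The "pattern" lives in how we group outcomes, not in the probabilities themselves.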

ANDERS: And one reason is, of course, these ordered states are usually much more useful than the random states. If my books are alphabetical on the bookshelf, I can find one much more effectively with much less effort than if I had to look at all of the books and try to find the one I want to have. If the atoms in a microchip are ordered in a particular way, it's going to be a working microchip, while a random organization is most likely not going to do anything interesting. So we are constantly trying to order the world, or notice patterns of order that we can exploit.

SPENCER: So how do you use the idea of entropy increasing when you're thinking about the long-term future?

ANDERS: The classic way of thinking about it is, of course, the heat death of the universe. This was the big, scary realization people got in the 1800s once they developed the theory of thermodynamics: energy can't be destroyed or produced, so we're just using up useful energy, the energy we can use to do work, and eventually it's all going to be gone, and everything will sit at some kind of equilibrium. That's the end of anything being doable in the universe. So we got this horrifying vision of a dead universe that would then stay dead forever. It was one of those great realizations that made us grow up as a civilization: we realized there is an enormous span of time we might actually have, but also an end of the world built into the laws of physics, not a religious end of the world, assuming a certain set of assumptions, of course. So that is one way of thinking about it: how long do non-trivial, low entropy states survive? It turns out that, over very long time scales, this is a real problem; if you are at a finite temperature, entropy is going to get you. But there are other interesting ways of thinking about it, too. One approach (which is wrong, but it's a fun approach) is to notice that this talk about entropy doesn't say anything about what stuff is made of. It was originally intended for steam engines and machines, and later got used for chemistry, but it really doesn't care whether it's atoms or molecules or other things; it's just a general description of the states of systems and their order. At that point, some people realized it might apply to things like societies. So you got a branch of really wrong history and futures thinking based on entropy. I think it was the historian Henry Adams who wrote a very gloomy history arguing that the physicists had shown that entropy always increases, which means, of course, that everything is getting worse, civilization included, and you can't deny this because this is physics. This is totally wrong, but the idea has popped up again and again. Similar claims were unfortunately picked up by some people in environmental circles, because they could be used to argue that our civilization is using up useful resources and needs to stop. That led to Jeremy Rifkin, for example, talking about a fourth law of thermodynamics, which doesn't exist: a claim that the material disorder of systems must always increase. All of this is wrong because, as long as you have an open system with energy washing through and waste heat being lost to the universe, you can use that energy flow to organize stuff and reduce entropy as much as you want, simply because you're dumping the extra entropy out into the universe, into the background radiation. But there was a surprising amount of really bad futures thinking based on this misuse of entropy.
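
The open-system point can be written as a one-line entropy balance (a standard textbook statement, not Sandberg's own formula): the second law constrains only the total, so a system that exports enough waste heat Q to an environment at temperature T can lower its own entropy.

```latex
\Delta S_{\text{total}} = \Delta S_{\text{system}} + \Delta S_{\text{environment}} \ge 0,
\qquad
\Delta S_{\text{environment}} = \frac{Q}{T}
\;\;\Longrightarrow\;\;
\Delta S_{\text{system}} \ge -\frac{Q}{T}.
```

Earth works exactly this way: it absorbs low-entropy sunlight and radiates a larger number of higher-entropy infrared photons back into space, paying for local order.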

SPENCER: It seems like a pretty common problem where a result from physics or math is taken in an almost metaphorical sense, but still reified as being true, and then that metaphorical version is used to justify things. I think about this with ideas from evolution that got taken up as, "Well, this is how evolution works, so this is the way the world should be," or, "The world is fundamentally about competition and survival," which I think is really a stretch and not what evolution is fundamentally about.

ANDERS: Yeah, Herbert Spencer's Social Darwinism was actually not nearly as competitive as it's usually painted these days. But those misunderstandings of evolution are certainly all around. And quantum mechanics, heavens! All that talk about observers and observables has produced an endless amount of New Age talk about how we influence the world, often hinting that, by observing stuff, we can get it to go in the direction we want, which is quite the opposite of what quantum mechanics actually says.

SPENCER: Yeah. I've also seen this with Gödel's incompleteness theorems, where people take a metaphorical interpretation of them and say, "Well, they prove that there are unknowable things, like that love can never be understood," or whatever.

ANDERS: Yeah. I think this is a common idea. We take something that sounds mysterious, powerful, and then we try to tie our other ideas to it. We certainly see that a lot with religion. If you really like something, in a religious frame of mind, you will probably try saying, "Yes, this is why God created it. And this is why it's a godly thing to pursue this thing I like," etc. The problem is, of course, we want good explanations that actually hold up, whether you believe in that religion or not and, ideally, analogies that actually don't introduce too much noise in our thinking.

SPENCER: There's always that trade-off when things are being explained to a lay audience, of simplifying without dumbing down because, obviously, you can't explain the full complexity. And for ideas in physics and math, obviously, to explain it fully would require a mathematical explanation that many people are not prepared to understand. So you're trying to explain it in words, and then there's that translation from math to the words, which is imprecise. How do you think about doing that trade-off?

ANDERS: A lot of the time, I'm just trying out different explanations, and then I see how much I cringe when I listen to my explanation and notice where things go wrong. Sometimes I also base it on where I see people getting things wrong from existing explanations. Earlier, I mentioned the rubber sheet analogy for general relativity. It's a beautiful way of showing things visually. There are a lot of TV shows where somebody puts a heavy bowling ball on a rubber sheet, it bends the sheet, and then they roll a little marble, and the marble orbits the big bowling ball because of the curvature. And it feels like, okay, here we see a beautiful example of a central idea in relativity theory about how mass curves spacetime. Except that now you have a picture of a rubber sheet in your head, as if spacetime were made out of a material, and you can start asking: what happens if I change the material it's made out of? On Physics Stack Exchange, a site I'm rather fond of, a lot of people ask questions like that. Some notice that gravity in this picture is still pointing down outside the sheet: the bowling ball only curves the sheet because it's being pulled down by gravity. That is not at all what the analogy is supposed to convey; the orbit should work just because the surface is bent. But now you have introduced so much confusion. So these days, physics communicators are being told, "Please don't mention the rubber sheet," which is sad because it's a great first step. When an analogy immediately starts misleading, I find that very scary when I'm trying to come up with a good one because, quite often, you need to test it on a lot of people and see when they get confused, and that can be several steps down the line.

SPENCER: I like your cringe test, the Anders cringe test for whether it's a good explanation. [laughs] But with the gravity rubber sheet idea, the thing about that analogy is that it's trying to explain one aspect of the situation but then, because you've given someone the visualization, they're very naturally going to try to use it to explain other aspects that it gets completely wrong.

ANDERS: Yeah. And sometimes this has these indirect effects that many people believe certain things. Black holes, in general, are amazingly good at generating confusion, partially because they are very strange things, very far away from a normal reality. But they also get filtered through a lot of analogies, many of them done by people who are not very careful or don't even understand it. So then you have an endless number of people assuming certain things about black holes and their properties, which then generates, again, a lot of noise. In the best cases, this means that they ask a question, and you get to set them right in a gentle and interesting way and show them that, actually, what's going on is something even more awesome. But quite a lot of people, they just think, "No, I've seen this several times over now. I totally know what's going on. So if those scientists are disputing it, they're obviously just hiding something or missing my important point." And then you end up with people proposing all sorts of pseudoscience ideas on how to use the rubber sheet of spacetime for anti-gravity.

SPENCER: Funnily enough, I find that when people have crackpot theories of physics, they're usually less weird than the real theories. They're usually trying to make things less confusing than actual reality.

ANDERS: Yeah, I think that's true. It's very rare to see a crackpot theory that introduces a lot of extra advanced math. Certainly, you see theories that are full of messy mathematics, total chaos. But it's very rare to see somebody introducing bigger, more rigorous schemes that are more advanced than the mainstream. Typically, it's all about making things closer to our normal world. And many of the people who claim that Einstein must have been wrong, what they really don't like is that Einstein was saying, "The world doesn't work the way it looks."

[promo]

SPENCER: One of my greatest finds in a bookstore was an old book that tried to prove relativity wrong, published shortly after Einstein's theory was beginning to gain acceptance. It's a lovely book where the author spends the whole time arguing in ways we now know are false, fully convinced that he has disproved relativity.

ANDERS: It's interesting because theories that are easy to test, they rarely get the crackpots. You get very few people who are crackpots about how water flows because you can just set up a few pipes and hoses and test it. And even if they don't do it, somebody else will, and their wonderful perpetual motion machine is going to fall down. But once you do it far enough from everyday reality, at that point, you can still get that feeling, "Oh, I'm trying to grasp the sublime aspect of physics and clean away the stupid stuff," and I can see the appeal of that.

SPENCER: With the things you can observe directly, you have firsthand experience. You have intuition about them. You can test them for yourself. With many ideas in physics, they're so far removed from your everyday experience. They run so contrary to it and you just have to take people's word for it, unless you want to go get a PhD in physics. You're so many layers away from the actual physical thing itself that you can see why people are tempted to disbelieve. And some of these theories sound so nuts. Like when you describe quantum theory, it sounds like the ravings of a lunatic. It just isn't. It just happens to be pretty close to the way the universe works, as far as we can tell.

ANDERS: But expressing it so it doesn't sound like the ravings of a lunatic, or of somebody very high on marijuana, is tricky. You could say, "Oh yes, reality is all one," which is true; we're all a state vector in some Hilbert space. And then, "this moves in an oscillatory manner"; well, that's the Schrödinger equation. That already sounds very much like a typical student dorm chat. [laughs]

SPENCER: I like that. [laughs] Reinterpreting all these platitudes as statements about Hilbert spaces.

ANDERS: The funny thing is, sometimes you can actually do that. It's a bit tricky. Sometimes you have people who are groping towards some pretty big truth. They don't have a good language for it, but with some help, you can extract it. Sometimes I have a feeling that, if you're clever enough, you could take the actual ravings of a madman, or just a random number generator, and produce a beautiful theory; but then you had to put in all the effort. It's a bit like when a friend of mine, Damien Broderick, explained a sentence of Derrida's philosophy to me. I had said that sentence was obviously totally wrong, and he explained, "No, what Derrida probably meant was something like this." Half a page later, it was actually a fairly profound thing. But then I started wondering: was that Derrida being smart, or Damien being smart?

SPENCER: Yeah, well, I think philosophers of the long past get a lot of benefit of the doubt, probably more than they should. [laughs]

ANDERS: Yeah, that's what I love about analytic philosophy. It's quite often much more boring and dry than other forms of philosophy, but it's attempting to be exact enough that you can actually follow the argument and check that it remains true. Now, since I myself have been doing a lot of ethics, there's still a lot of slipperiness here. But I like when you make things clear, but it's surprisingly hard to do.

SPENCER: I find that one of the trickiest places where things can be slipped in, in analytic philosophy, is when people are leaning on intuitions without just coming out and saying that they're leaning on intuitions. Do you observe that? That that's a place where arguments can be weak in a way that's not so apparent?

ANDERS: I think that happens quite often in ethics. People are very aware of intuitions, and usually try to bring them out and then come up with some nice explanation of why those intuitions actually make sense, or why they don't. And usually the ethics seminar works up towards the point where the lecturer explains that, given these natural, reasonable assumptions, you need to bite the bullet and accept some outrageously crazy or problematic conclusion. That's great fun. All the philosophers love doing that. And it's even more fun when you get the professor to say, "Oh, I have to bite the bullet and accept this," or, "No, I need to give up my intuition." General enjoyment. But the problem is, of course, that intuitions might be biasing the arguments. I've seen a fair number of arguments in bioethics that are strongly biased by cultural mores or religious views or just general reactions to various things, where people are working backwards and trying to motivate those reactions, sometimes with great ingenuity. But you can tell that it's already coming from a very biased perspective. Wouldn't it be more fun to work outwards from something unbiased and see what is right? But then you realize, "Yeah, I can't get rid of my own intuitions either. What I regard as unbiased is going to be regarded as pretty biased by my opponent." So the best you can do is try to be as clear as possible. It's a bit like earlier, when I mentioned the physics professor pointing out that various violations of relativity theory would produce crazy stuff. We have an intuition that we're not living in a crazy universe, but that is a matter of degree. Relativity theory, even the orthodox version of it, is a pretty crazy thing. Already, time and space get mixed and rotated in a weird sense when we move around. How much less crazy is that than time travel?

SPENCER: Yeah, and there have been fascinating examples, like particle antiparticle pairs where, as I understand it, people just assumed it was some glitch of the math that shouldn't be taken seriously, and then eventually they discovered they actually exist. Is that right?

ANDERS: Yeah, the original story of finding antiparticles came about when quantum mechanics was getting started, and Dirac wanted to understand how electrons behave. That's a basic thing you want to understand as a physicist in the 1920s. He needed an equation describing how particles behave that fit with relativity theory. There were a bunch of funny mathematical problems, but he ended up with the famous Dirac equation, and then it was discovered that it has both positive-energy and negative-energy solutions. That was a bit embarrassing: what are these other things? Basically, he concluded that maybe there is an antiparticle. It took a while. At first, he believed, "For every electron, I get this positive particle. Maybe that's the proton in the atomic nucleus. Maybe this is a really neat way of handling it." So the first paper actually suggested, "Aha, I found a unifying theory here." Then it turned out that it can't behave like the proton, and there was the problem of what this kind of particle actually is. Eventually, we found the positron. And then, okay, particles have antiparticles, and it all comes out of the math really nicely. But there was a fair bit of confusion, which is usually glossed over today when we tell the story.

SPENCER: Well, it's a lesson that, sometimes, we've got to take the math seriously, even if the math says very weird stuff.

ANDERS: Yeah, and having a good sense of when it's the model that's just broken and producing nonsense, or the model might be latching onto a pattern in reality and is now saying something very weird. Understanding that difference, that's where you see the sign of a real quality thinker.

SPENCER: Let's go to the third constraint that I wanted to touch on with you, which is energy conservation. Could you tell us a little bit more? What does energy conservation really say, and to what extent is it really an unbreakable law?

ANDERS: It began, of course, with the question: what is energy? That was something Newton was groping towards; it was somewhat unclear in his day. It was clear that there were some things, like momentum, that were very thing-like in his theory, but energy doesn't show up straight away in Newtonian mechanics. As the years went by, people noticed that certain quantities in these equations always remain invariant. Eventually, after a whole bunch of terminological confusion, we ended up with the concept of energy. You can define it in terms of what is conserved: kinetic energy gets turned into potential energy and back, et cetera. Or you can be practical, like an engineer: "Energy is the stuff I can use to do work. I have an energy collecting apparatus; I collect energy, use it to perform work, and then get some leftover waste heat." Gradually, we got a better theory about what that waste heat was doing, using thermodynamics, and people developed nice equations for how energy gets converted. This is all great, except: what is energy? Why does it do that work? It was Emmy Noether who gave a really beautiful answer in the early 20th century. By that point, people had realized that you can reformulate Newton's mechanics of forces, masses, and accelerations in different ways. They're mathematically equivalent, but differently useful for different projects. Lagrangian mechanics makes use of changes in kinetic and potential energy, and it's great if you want to describe how things move when they're subjected to constraints, which is super useful if you're an engineer, because you don't want your cog wheels spinning loose in space. No, they're affixed to your engine, supposed to be rotating, and there are a lot of constraints, because the machine is supposed to do something useful. So Lagrangian mechanics was super valuable for that. Then Hamiltonian mechanics shows up, and it's mathematically really, really beautiful. All versions of classical mechanics fit together very nicely. What Noether discovered was that, if you express your physics as something like Lagrangian mechanics (and all the other parts of physics seem very amenable to this: you can express electromagnetism that way, relativity theory, and so on), then if the rules for how things get updated are invariant along some symmetry of the space you're working in, you get something that is conserved. So the symmetries of the problem, of your theory of the world, create things that must always stay constant. This is mathematically very powerful. And for physicists, it also meant that, whoa, we can start looking for them. If we discover something in the world that seems constant, there might be an underlying symmetry. She suddenly set the agenda for much of 20th century physics: looking for symmetries, looking for conserved quantities. People started finding them all over the subatomic realm, and this was a revolution. But the story about energy is that energy is what you get because the laws of physics are the same across time, from moment to moment.
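
For readers who want the one-line version of Noether's argument for time symmetry (a standard textbook sketch, assuming a single coordinate q and a Lagrangian L(q, q̇) with no explicit time dependence): differentiate L along a trajectory and use the Euler–Lagrange equation.

```latex
\frac{dL}{dt}
= \frac{\partial L}{\partial q}\,\dot q + \frac{\partial L}{\partial \dot q}\,\ddot q
\;\overset{\text{E-L}}{=}\;
\frac{d}{dt}\!\Big(\frac{\partial L}{\partial \dot q}\Big)\dot q
+ \frac{\partial L}{\partial \dot q}\,\ddot q
= \frac{d}{dt}\!\Big(\frac{\partial L}{\partial \dot q}\,\dot q\Big)
\;\;\Longrightarrow\;\;
\frac{d}{dt}\Big(\underbrace{\frac{\partial L}{\partial \dot q}\,\dot q - L}_{E}\Big) = 0.
```

For L = ½m q̇² − V(q), the conserved quantity E works out to ½m q̇² + V(q), the familiar kinetic-plus-potential energy.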

SPENCER: Is the right intuition there that, if you have a symmetry like time translation, then whatever equation you're dealing with doesn't change if you shift it forward or backward in time? So the derivative with respect to time, which is just asking how much the output changes as you shift time, is zero; and that essentially means there's some conserved quantity, something that always stays constant. And that leads to this pairing where each symmetry gives some conserved quantity?

ANDERS: I think that is a great way of priming the intuition. I have a feeling that the ghost of Noether is shaking her head, saying, "Not quite," but I think we're close enough here. It also has to be a continuous symmetry: you can imagine a smooth translation across time or space, or a smooth rotation. Whereas if you mirror the universe, you still get a valid mirror universe with the same laws of physics (except that people have goatees and are evil in that one), but you don't get a conserved quantity.

SPENCER: Oh, interesting. So you don't get a conserved quantity when there's a discrete symmetry?

ANDERS: Yeah. And the reason for that, well, I think it has to do with that kind of derivative. The cool thing is, this is a beautiful way of thinking about energy. I remember doing a simple IQ test when I was taking a basic psychology course; it was mostly an exercise for the grad student to learn how to administer these tests. It was one of those simple tests where you get asked various questions and are supposed to answer them. I was generally acing it and feeling rather smug, up until the question arrived: what is energy? That's where I started hemming and hawing. I knew far too much, but I didn't know about Noether's way of thinking about it. So in the end, I think I got one out of two points, which annoyed me endlessly.

SPENCER: Well, so you've made up for it since, hopefully.

ANDERS: Hopefully, yeah. And the beauty of Noether's way of thinking is that now you can go and examine your theory of physics. You can try adding extra terms to your equations, saying, "Maybe there are extra fields," and work out what conserved quantities would follow. Then you can run off to your particle accelerator and start looking: are there interactions that obey this or not? So it's a very interesting way of testing large groups of theories. The problem is, of course: does reality actually obey these nice equations? We have been talking about what the mathematics says, but reality is under no obligation to obey beautiful mathematics.

SPENCER: Yeah. You mentioned earlier that, at the largest scales, you don't get energy conservation. What's the deal with that?

ANDERS: That is because, in special relativity, spacetime is flat. Everything is very symmetric, and you get beautiful conservation of everything. In general relativity, spacetime is kind of lumpy: you have masses that curve it. But most importantly, you might be in an expanding universe, and that means the reference frames are no longer equivalent. If we compare the universe today with the universe a few billion years back, well, the universe has expanded a lot; there is much more space available. That actually breaks the time invariance of the equations. It's a little bit subtle, but the end result is that you end up with energy disappearing. The most obvious example is redshift. If we look at a remote quasar, the light coming from it has been traveling for billions and billions of years before it reaches one of our telescopes, and it's very redshifted. That's much like the Doppler effect you hear in the siren of an ambulance as it passes by, except here the wavelength gets stretched by the expansion. Long wavelength light has less energy, so some energy has disappeared. If you imagine a remote supernova and a blast of light approaching you, there is less energy in it by the time you collect it. Where did it go? It just disappears, because of the expansion of the universe. You can't apply full energy conservation globally. Locally, it's still applicable: if I have a little lab, all measurements are going to work out. But on the global scale, it doesn't work. And it gets even messier if spacetime has wormholes and complicated topology, where you can go around different paths.
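
In formulas (standard cosmology, added here for concreteness): a photon's wavelength stretches with the scale factor a(t), so its energy falls as the universe expands.

```latex
\lambda \propto a(t), \qquad
E_{\gamma} = \frac{hc}{\lambda} \propto \frac{1}{a(t)}, \qquad
\frac{E_{\text{received}}}{E_{\text{emitted}}} = \frac{1}{1+z}.
```

Light from a quasar at redshift z = 3 arrives carrying a quarter of the energy it left with; nothing along the way "absorbed" the difference.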

SPENCER: So how do you use the idea of energy conservation when you're thinking about the distant future?

ANDERS: The useful thing is, of course, that energy can be used to do useful work, at least if you can dump your waste heat somewhere else. I can sum up how much mass-energy is available in a region and ask how much of it can be extracted. That tells me, essentially: how much can you move masses around? How many bits can be erased in irreversible computation? How much can you heat things up? Et cetera. That gives me a bound on what activities a civilization can do. Quite often, when people talk about the far future of the universe, the story essentially turns into, "...and then eventually the last stars sputter and die out, and then the rest is darkness," a very gloomy ending. And I'm saying, "Oh, that's where history really begins," because it turns out you still have a lot of energy around in that universe, even when the stars are not obliging us by shining. The mass-energy inside old stars, you can still extract, for example, by dumping the matter carefully into a black hole. As it spirals down through the accretion disk, it heats up, and you can put collectors around it and get a lot of energy out. So you can start thinking about the mass and energy budget of the future, and ask questions like, "How much can we do, given what we get in a galaxy? Given the expansion of the universe, how much do we lose as other galaxies move away from us? Should we do something about it?"
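
As an example of how such bounds get used (my own back-of-the-envelope numbers, not figures from the episode): take one solar mass of fully extracted mass-energy and ask how many irreversible bit erasures it could pay for at the Landauer limit, against today's cosmic background temperature.

```python
from math import log

c     = 2.998e8    # speed of light, m/s
k_B   = 1.381e-23  # Boltzmann constant, J/K
M_sun = 1.989e30   # solar mass, kg
T_cmb = 2.725      # cosmic microwave background temperature, K

energy       = M_sun * c**2           # ~1.8e47 J if fully extracted
cost_per_bit = k_B * T_cmb * log(2)   # Landauer bound, ~2.6e-23 J/bit

print(f"{energy / cost_per_bit:.1e} bit erasures")  # ~6.9e69
```

And since the background temperature keeps falling as the universe expands, a patient civilization gets more computation per joule by waiting.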

SPENCER: That's a pretty wild idea that, even after the stars go cold, that there might be so much energy left that that's just the beginning.

ANDERS: Yeah. I used to have this argument with a lot of people who said, "Look, there can't be any aliens out there, because they would just put Dyson spheres around all the stars, or maybe turn them off and use some more effective way of extracting the energy. We're not seeing that, so there can't be any super-civilizations out there." And it always made me a bit nervous. Wait a minute: why would it be rational to turn off stars? How much energy are stars actually wasting if you're not collecting it? It turns out that, if you fuse hydrogen, you turn 0.7% of the mass into energy. If you then fuse the helium all the way up to iron, you get a little bit more. But basically, even if you fused all of the sun, you would still only get a surprisingly small fraction of its mass-energy out as energy. The remaining stuff, however, you can extract by putting it into a black hole. And that is kind of optimistic, because that's most of it. We're talking 99.3% of the mass here.
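
The arithmetic behind those percentages, as a quick check:

```latex
E_{\text{fusion}} \approx 0.007\,Mc^{2}
\qquad\text{vs.}\qquad
E_{\text{left over}} \approx 0.993\,Mc^{2}.
```

So for the sun, hydrogen fusion taps only about 0.007 × 2×10^30 kg × c² ≈ 1.3×10^45 J, while black-hole-based extraction could in principle reach into the remaining 99.3%.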

SPENCER: So they're just not thinking big enough if they're talking about using fusion basically?

ANDERS: It's funny, because I think we're moving up a kind of ladder of energy sources. It used to be that the energy source we had was muscle power. Then we got water wheels, and then the steam engine, burning organic matter and fossil fuels. Then we got better energy sources, both photovoltaics and nuclear. But fission is not that effective compared to fusion. If we could get fusion power, I think that would be great: much more fuel, even on Earth, and much more effective. Of course, we already have the sun doing it naturally, and we can imagine building a Dyson sphere around that. But the fusion reactions we're exploiting are essentially the last dregs of the fusion fire from the earliest stages of the Big Bang. That's when quarks joined to form nucleons, and nucleons formed the first nuclei; that's where most of the energy was originally released, although fairly uselessly, as just high-entropy background radiation. Eventually, I would expect us to move up more and more towards black hole power, which I think is probably about as ultimate as you can get. Maybe there's some clever trick to convert normal matter straight into energy, but I haven't seen many hints that it might work. There is something called sphalerons (which I never understood), a very weird solution in Standard Model physics that might do it, but I'm skeptical about it; I want to see more evidence. But black holes definitely seem to work.

SPENCER: Before we wrap up, how about we do a quick rapid-fire round where I ask you really difficult questions and you do your best to give short answers.

ANDERS: Cool.

SPENCER: All right. First question: what do you think the chances are that a technologically advanced alien civilization exists?

ANDERS: Almost one, I think. The universe is big enough. It's just that I don't expect to meet them until after a few billion years.

SPENCER: What do you think the best solution is to the Fermi Paradox, the idea that we don't seem to be seeing alien civilizations around us?

ANDERS: See my previous answer. I think the universe is very sparsely inhabited because I think most biospheres probably run into trouble evolving more complex organisms, so they get stuck as single-cell prokaryotes. But we'll find out sooner or later if that's right.

SPENCER: What are suffering risks? And what are the suffering risks you're most worried about?

ANDERS: The idea of suffering risks is that suffering might matter morally just as much as extinction, or maybe even more. After all, you can only be dead, but you can imagine being in extreme agony that makes you really want to be dead. And the problem might be that, if we expand life across the universe, we might generate much more suffering. Especially if you take an ethical view that gives suffering priority over pleasant states, it might actually be a really bad idea to spread life around. I'm not of that opinion, but I'm co-authoring a paper with my friend, Assiye Süer, about spreading life in the universe, trying to see whether we can come up with a compromise. The simple answer is: yeah, we should probably not spread life into the universe straight away. We should think this through rather carefully, because the stakes might be extremely big.

SPENCER: Can global civilizations be virtuous? And what are some of the considerations?

ANDERS: It's fairly common, when we talk about humanity, to say, "Oh, we're very unwise," or, "We're an adolescent civilization." And it's quite often expressed as, "Does humanity have certain virtues, like patience?" Now, I don't think humanity or civilizations are currently the kind of entities that can have virtues because, in order to be a virtuous person, you need to understand the situation. You need to think about different actions and decide to take a good action for reasonable reasons. The guy who runs onto a battlefield and saves somebody who's wounded because he doesn't notice it's a battlefield (he's hard of hearing and seeing) is not a brave person. He's helpful, but he's not brave. The person who understands the situation is the brave one. So I think what we might need to look for is whether we can create the kind of coordination structures in humanity, and in civilization as a whole, that make it virtue-apt, so that we can actually say, "Yes, now we're virtuous." And I'm somewhat optimistic about that. I do think groups can sometimes be ascribed virtues. You could say, "That's the team that is intellectually honest." Each individual member might have their own views, but the emergent behavior and the joint decisions they make, for example about scientific papers or how to investigate things, might amount to having honesty as a virtue. And I think there might be virtues that only civilizations can have. After all, there are environmental virtues; we should be caring for the environment. But the virtue of not driving species to extinction is not something individuals normally have; that seems better ascribed to societies. So I think that, over time, we might have a chance to become a virtuous civilization.

SPENCER: What does trying to align AI to be safe have in common with aligning society or companies to be safe?

ANDERS: The problem of AI alignment is basically that we have these complex adaptive systems that we somewhat understand, but that we also know have a lot of powerful emergent properties and powerful optimization abilities. And we're a bit worried that, if we give them the wrong goals, they're going to go off and do dangerous things in the world. But this is, of course, true of many of the critiques you see of societies and markets and companies, because they're also complex adaptive systems. If their objectives are not set the right way, they might be optimizing for bad things and, in that case, we're in trouble. So you could say that one might be made out of software and one might be made out of people and pieces of paper, but there might be important isomorphisms between them. On one hand, we might think about incentive design: how do we keep companies behaving well, and can we set that up to apply to AI systems that might be quite alien? On the other hand, there might be structural ideas from AI safety that we could take and install as mechanisms inside our governments or markets to keep them on track. I think it's very early days for understanding this, but it's also very fruitful. I think we can get an interesting collaboration between AI safety people and economists and political scientists. I myself am working on a book together with Professor Cyril Holm about Law, AI, and Leviathan, where we look at extended cognitive systems in society, how they might be partially cyborgized with AI, and the question: how do we solve the alignment problem of these composite systems?

SPENCER: What is panspermia, and how likely do you think it is that it played a major role in the universe?

ANDERS: Panspermia is the idea that there might be seeds of life floating around in space. It was originally proposed by one of the Greek philosophers (I can never remember if it was Anaximander or Anaximenes, but one of them) as an explanation for how life came to Earth. It was later developed into more of a scientific theory by Svante Arrhenius, the great chemist. The idea is that maybe asteroid impacts on planets splash bacterial spores into space. We know that some meteorite material has moved between Mars and Earth. So it seems plausible that life could spread, at least within a solar system, if it could survive a trip that might be many millions of years long; bacterial spores can survive a very long time if they're protected well enough, though there is, of course, a big debate about whether this actually happens. Others have suggested this is likely to happen in stellar nurseries. And others have said that, even if panspermia is not the explanation for how life arrived or emerged on Earth, we might still want to make it true: we might want to launch bacterial spores and deliberately seed the rest of the universe. I find the ethics of directed panspermia a very interesting question. I'm, in general, rather bullish that it's a good idea. But if you take suffering risks seriously, you might say, "Wait a minute, that might be one of the worst ideas ever." We might want to be rather careful about not creating biospheres that are low value or create a lot of suffering in the universe. There are also some interesting practical questions about just how long bacterial spores can survive, and how you actually deliver them over interstellar distances.

SPENCER: It's funny you mentioned that because I met a billionaire who literally is trying to work on spreading microorganisms onto other faraway planets. Maybe you should have a conversation with him.

ANDERS: That's great. And my advice from the paper I'm writing with Assiye Süer is basically: Please hold off on that. Please work on the delivery system. It might be a good idea for us to have that. After all, if everything goes pear-shaped on Earth, we might still want to launch some spores to have a second try for life somewhere else. But we might not want to launch them right now. We can afford to wait.

SPENCER: Final question for you: what's something that you hope human civilization will do in the next ten or 20 years but that you're not confident civilization will do?

ANDERS: I'm hoping we will become better at making backups of our data and our culture. Right now, I think we're somewhat in a digital dark age. The Internet Archive is a wonderful institution, but it's a single private institution with a very finite budget. That's kind of weird. We have achieved so much, but we put all our scientific journals in data centers owned by a few corporations. If they go bust and don't pay the rent on the data center, what happens to the journals? Hopefully those files are backed up somewhere. But there is no cohesive effort to make sure that even the core information that really matters is saved, let alone the many ephemeral but important things. So much of the internet's history has already been lost, and that is just the last few decades. Add to this the various gripes about copyright holders locking down information, et cetera, and it seems like we might want to become much better at documenting who we are, what we're thinking, and what we're doing. We have the potential to do that, and to put archives into safe places, both to preserve civilization if something bad happens and to give useful ideas and information to future generations. We don't even know what they're going to be looking for. Today, there are people decoding old ship logs to get climate data, because that's useful for fine-tuning climate models. And the ancient Sumerians would probably have been rather astonished to find out that a lot of their business mail is now being debated and used to understand their civilization. So I do think we should be better at storing information, both individually (we all need to learn to make better backups) and even more on the civilizational scale.

SPENCER: Anders, thank you so much for coming on.

ANDERS: Thank you. This was great fun.

[outro]

JOSH: A listener asks: "What would society look like if everyone reached a state of fundamental well-being as described by Jeffrey Martin; or what if even half the population did?"

SPENCER: Well, people define these things differently. I'm trying to recall how Jeffrey defined it. I think his definition had something to do with a fundamental sense that everything is okay: really, no matter what, no matter how bad things are, no matter if you're in pain, you just feel that fundamentally things are okay. How would that change society? It's interesting, because you'd think that if people had this kind of "enlightenment," they would act really differently. They might come across as very unusual, or maybe they couldn't function in a normal job, or wouldn't bother to do one. But for the people I've talked to who say they've had these really deep changes through meditation and things like that, it's not that they would say it didn't change them, but I think they would say you wouldn't necessarily notice it talking to them. They can still go to their jobs and do their work. In many ways, they seem to act the same, which is maybe a bit surprising. So on the one hand, it makes me wonder whether, if everyone had this sense of fundamental well-being or deep okayness, people would act pretty similarly, just suffering substantially less. But on the other hand, I think about how often we get pushed in our behavior by being afraid of something. We don't want to do something because we're afraid, so we avoid it; we don't take an opportunity; we don't do something that's valuable. So maybe people would engage in a lot less avoidance behavior if they had a sense of deep okayness. Maybe they would do the things that scare them more, at least the ones that aren't dangerous. Obviously there are reasons to avoid dangerous things, but there are many things people avoid, myself included, that are not dangerous; they avoid them just because they feel anxious about them.
