February 27, 2025
Is it useful to vote against a majority when you might lose political or social capital for doing so? What are the various perspectives on the US / China AI race? How close is the competition? How has AI been used in Ukraine? Should we work towards a global ban of autonomous weapons? And if so, how should we define "autonomous"? Is there any potential for the US and China to cooperate on AI? To what extent do government officials — especially senior policymakers — worry about AI? Which particular worries are on their minds? To what extent is the average person on the street worried about AI? What's going on with the semiconductor industry in Taiwan? How hard is it to get an AI model to "reason"? How could animal training be improved? Do most horses fear humans? How do we project ourselves onto the space around us?
Helen Toner is the Director of Strategy and Foundational Research Grants at Georgetown's Center for Security and Emerging Technology (CSET). She previously worked as a Senior Research Analyst at Open Philanthropy, where she advised policymakers and grantmakers on AI policy and strategy. Between working at Open Philanthropy and joining CSET, Helen lived in Beijing, studying the Chinese AI ecosystem as a Research Affiliate of Oxford University's Center for the Governance of AI. Helen holds an MA in Security Studies from Georgetown, as well as a BSc in Chemical Engineering and a Diploma in Languages from the University of Melbourne. Follow her on Twitter at @hlntnr.
SPENCER: Helen, welcome.
HELEN: Thanks for having me on, Spencer.
SPENCER: Now, you lived through what I'm sure was quite an intense experience being on the OpenAI board, and I'm wondering, what are some things you learned from that experience?
HELEN: Yeah, plenty, many different things. I served on the board for two and a half years, and that was an incredibly interesting experience. I remember seeing a demo of GPT-4 in the summer of 2022, which is about nine months before they ended up releasing it. That was pretty mind-blowing at the time.
SPENCER: A glimpse of the future.
HELEN: Yeah, totally. I remember they showed us some of the things that knocked Bill Gates' socks off, and our socks were also knocked off. Then, of course, the experience of November 2023, when we fired Sam. The rest is history. Some of the things I learned from that were things I had believed before in the abstract, but were really interesting to observe up close. For example, I think a pretty core dynamic for what was going on in that period was that there were quite a lot of people who had bad things to say about this powerful person but were pretty afraid of what would happen to them if they expressed those views. For as long as they expected the powerful person to continue to hold power, they were going to be unwilling or unable to actually stand up and say anything bad about them. It's sort of this collective action problem, where for each individual person, it's in their own interest to just keep their head down. But then that means that, across the board, this kind of criticism of a powerful person is suppressed. I think that was at play in that situation, but I also think it's a very general dynamic that you see throughout society, and I think, without wanting to get too grandiose about it, it's an enemy of truth and an enemy of justice. It was very interesting, if not exactly fun, to see that up close. More generally, getting to see how people respond to incentives, how people respond to different kinds of pressure — public pressure, media pressure, pressure from people close to them — and how this nicely designed corporate structure, which was really designed to put the public interest first, was not a normal board structure. It turned out that that sort of theoretical structure was not at all sufficient when the rubber hits the road. I think that is, unfortunately, a dynamic we might see multiple times if the technology keeps progressing towards more advanced and more powerful systems, where you have these corporate or governance structures that are supposed to work in theory, but in practice, we'll see if they're able to hold up to the kind of huge pressures that they're subject to.
SPENCER: It makes me wonder whether market forces are just so powerful that it's hard to resist them. If there are huge amounts of money on the line, if there are ways to make a lot of money, that can tend to trump other considerations, even for corporations that try to have good governance. It's like, "Oh, all the monetary incentives are pushing towards one thing; it's hard to go in a different direction in practice."
HELEN: I think that's definitely a big underlying issue. Especially for as long as the companies involved think that they're going to need huge amounts of capital to build enormous data centers, if they think they need that kind of investment and the backing of investors, that's going to matter a lot. Another challenging aspect, which is not specific to markets, is that the more stakeholders you need to buy into a decision, the more obvious or egregious it needs to be. If it's about firing someone, the person can only be fired if a large number of people agree that they obviously need to be fired. That's going to be a very difficult bar to hit, and you're probably going to miss plenty of cases where they perhaps should have been fired, but it was not totally egregious and obvious. Likewise, with AI development more generally, we might see things that are actually pretty unsafe or concerning, but if it's not really egregiously obvious, it's going to be difficult to get the buy-in that you need. You might end up not being able to take certain actions that, on balance, might have been a good idea, but just weren't completely obviously the right thing to do.
SPENCER: An experience I once had when I was on a board is that I observed that you could lose a little bit of political capital, or just social capital, if you voted against everyone else. If you had the perception that everyone was going to support something, you might as well support it, because going against it was not going to win. You were going to lose either way. But you kind of preserve more capital politically and socially by just voting with the majority. It created this really weird effect where you could get these super majorities on everything, because nobody wanted to be the one oddball. That actually is a kind of terrible incentive to have.
HELEN: I think there's a version of that as well, which is, even if you don't have a formal vote, you might not even get to a formal vote if you're always trying to work off this sort of consensus principle. If there's an underlying feeling that going against the consensus or needing to have a vote and actually count whose hand goes up is considered to be anti-social or unhelpful, then you get even more of the dynamic you're talking about. These apparent consensuses where, in fact, there's more disagreement simmering beneath the surface. I don't want to say that was something I observed on the OpenAI board, but I think it can come up a lot in regular management situations as well, where people are trying to be nice and friendly and all go along to get along, but then you end up not actually having productive discussions that would be really useful to get into.
SPENCER: Yeah, and I imagine the dynamics are that much more intense if there are actually powerful people you're opposing. It's one thing to lose a little bit of political capital because you disagree with the majority. It's another thing if someone actually might have it in for you if you vote against them.
HELEN: Yeah, for sure. I think with OpenAI as well, I was able to be out in the open more than some other people involved were, because my work and my professional relationships are not purely in Silicon Valley and not purely a part of that tech world. I think I also had a little bit of insulation from some of those dynamics. They still hit me to some extent, but I think a little less than many other people in and around the company.
SPENCER: I think another element of these kinds of situations that often people don't realize is just how much is not known. In other words, there's a certain amount of public information that might be like an iceberg where you're just seeing the tip of the thing, and then there's this huge amount of information that nobody knows about. If you just read the public information, you might get a very different view than if you know the whole situation. There's a lot of details I don't know about this particular situation, but I have observed that this seems to often happen, that when you're close to a thing, you're like, "Yes, the public version really misrepresents the full situation."
HELEN: Yeah, for sure. I mean, definitely. Another lesson for me here was how quickly a public narrative can get set. It was really within hours that there was this certain perception of what this was about. It was about safety versus progress. It was about the board being freaked out, which is just not at all connected to reality. I get that a lot of people were very frustrated that we didn't share more information. Why we didn't is a long, complicated, and really not that interesting story. But yes, it was really interesting to see how quickly and how confidently a story can become kind of the canonical reality when, in fact, there's a lot that isn't actually known.
SPENCER: Some people are also just really good at crafting narratives. That's a skill in and of itself. If there are groups in competition, some are really good at crafting narratives, some are less good. It's interesting to think about who might actually win this sort of narrative war.
HELEN: For sure.
SPENCER: So I know you've done a lot of interesting work in AI that's completely separate from your work on the board. Do you want to tell us just a little bit about your AI work broadly?
HELEN: Yeah. So I've been in the AI policy space starting around 2016. I was working at a large philanthropic organization called Open Philanthropy. At the time, I guess it wasn't that large, and I happened to be working there when they really wanted to scale up their work on AI. I got to be sort of the main person there thinking about AI policy and governance questions, starting in 2016, which was really interesting. As I dug into that space, there are so many different elements of AI policy, but the one that particularly drew me, seemed particularly important and interesting, was these national security angles of AI. That's AI in the military, AI on foreign policy, international competition, and geopolitical dynamics. Because of that interest, I ended up spending some time in China in 2018, which was super interesting. At the start of 2019, I had the opportunity to move to Washington, DC, and help co-found essentially a think tank, a policy research organization within Georgetown University called CSET, the Center for Security and Emerging Technology, and that's where I've been ever since, for the last six years. That has been a really cool experience. We made the bet in 2019 that policymakers, especially in Washington but also elsewhere, were going to need more analysis and higher quality analysis than they were getting about the national security implications of AI, and I think that turned out to be a good bet. Since then, I have been really trying to contribute to decision-making and conversations and policymaking about AI in Washington and elsewhere, based on the research that we do.
SPENCER: There's a lot of talk now about US-China competition around AI, especially with DeepSeek coming out in the news, which, for those that haven't heard about it, was this new model that came out of China that was very competitive in many ways with US models, but it was trained for a lot less money, at least, that's the perception. And so it started making people think, "Well, maybe it really changes the dynamics." Before we get into sort of the details of your views on this, I'd be curious to hear how you would summarize the different perspectives on that kind of US-China race for AI.
HELEN: One thing I think is really important to know when you're thinking about this is that for the national security community in Washington, thinking about the US-China relationship in terms of competition is basically the bedrock of all of their thinking about national security in general right now. So thinking in terms of competing with China, beating China is the core theme of US national security thinking right now, and has been since around 2017-2018, when thinking sort of started to shift in that direction. I think sometimes I talk to people who are coming at this from an AI background, and they're like, "Why is there an assumption that it's a race? Here are some reasons you might think it's a race. Here are some reasons you might not think it's a race." An important starting point is to say, "Right now, everything for the US national security community is competition with China." So if they can think about this topic in those terms, they're definitely going to.
SPENCER: Right. So it's just applying a mental framework that they already have for many other things, just immediately to the AI topic.
HELEN: Right. It's sort of the most natural lens to use, and it fits well onto this topic as well. It's a very consequential technology. The US and China are in the middle of a big geopolitical competition, strategic competition, and the US and China also happen to be two of the leading countries in AI. I think the US is sort of unambiguously the leading country in the world. And then if you're going for who's second, probably it's China. You might argue the UK or some other country, but the setup really lends itself to being viewed through this lens of US-China competition.
SPENCER: So what are some of the narratives that you see out there, both in the policy world and also just broadly in the broader community that talks about these things? And then I'm curious to hear where you fall among those perspectives.
HELEN: I think a big narrative that you see, and actually, we've seen at CSET since the very beginning, since 2019, is this narrative of China eating our lunch, or China innovating better than us. China is about to beat us or already beating us, and people will pull lots of different kinds of evidence to support this. I think that's honestly the dominant narrative. Then there's a slight tweak on that, which is, "Well, the US is leading, but China is really hot on our heels." I think a fair amount of work that we've done at CSET has tried to really just go out and figure out what is true here, what we can actually learn about different parts of the AI ecosystem and who is ahead and who is behind, and where you want to be focusing on research leadership versus where the application or diffusion of the technology matters. So I think there are differing perspectives as well, depending on which part of AI you're looking at.
SPENCER: How do you think about being ahead in AI technology?
HELEN: I mostly think that it's very hard to talk about this. It starts to get very hard to talk about AI as one thing and try to have one answer for all of AI. For example, a place where I think China is pretty unambiguously leading is image processing and especially surveillance, and that's essentially because, when it started to become clear that these AI advances could be used for surveillance purposes, for putting into your surveillance cameras, or for voice recognition or other things, the US and a lot of the Western world gave that, at best, a yellow light: some parts of the government were excited about it, but a lot of civil society wasn't, a lot of the public wasn't, and there was a big public debate about that back and forth. Whereas in China, that kind of thing is very much what the government wants to do. The public doesn't really have the ability to push back, obviously, in a much more controlled and controlling society. So if you're just looking at that sort of surveillance use case, then China, I think, is unambiguously ahead. Whereas if you look at other things, it really depends on the question you want to ask. So if we're looking at AI for military technology, and you're interested in the next few years as a time frame, then I think it actually makes less sense to look at research leadership and who's at the cutting edge of the newest innovations, and much more sense to look at applications and diffusion of the technology. How is it actually being put into practice, whether that's autonomous drones, or using algorithms or AI systems to optimize logistics, or the use of AI in back office processes like hiring or finance, or all those things that militaries have to do as well. So that's, I think, the military domain. And then there's this whole other set of questions around the really advanced frontier AI systems, where I think it does make sense to look at the research landscape and who is really innovating at the forefront. So, again, I think it really depends on which aspects of AI you're most interested in, and how you go about answering the question of who is ahead.
SPENCER: One perspective that you might see out there is that the models are improving so quickly that the deployment part is less important, because whatever they were deploying a year ago is going to be outdated anyway, and so it's really just the frontier that matters. Do you think that that's misguided?
HELEN: I think it really depends on what question you're trying to ask and also what assumptions you're making. So I think there's a worldview that some people hold where AI is progressing quickly. It's going to keep progressing incredibly quickly. Within a few years, we're going to have AI that's as smart as humans, and then that's going to be used to build even more sophisticated AI, and you're going to have this takeoff that's going to spiral out of control. I think that's a possible future. I think it's a pretty extreme possibility that I don't think is incredibly likely, and I certainly hope isn't that likely. And I think if you're not in that frame of mind, then I do think the kind of application and putting into practice questions matter quite a lot. If part of the reason that you want the US to be ahead is so that the US could prevail in a military conflict, for example, then it's going to matter a ton. What are the procurement processes that the US Department of Defense has, and how do they compare to the Chinese procurement processes? Because if you have a US company with AI that's two years ahead of their Chinese counterpart, but the Chinese military is better at actually taking that and building it into military systems, then the Chinese are going to be ahead on the battlefield. So I think it really depends on the different assumptions. And I think it takes some time to kind of unpick those different questions.
SPENCER: It sounds like you think that the narrative of China being ahead, or at least close on our heels, is probably wrong, at least in some senses. In what senses do you think it's wrong? And why do you think that people are wrong about this?
HELEN: I think it's basically right to say that the US is leading and that China is right behind us. And I think DeepSeek was a good demonstration of that. So a US company, OpenAI, demonstrated a capability with these new reasoning models that kind of think step by step through complicated problems. And then this Chinese company DeepSeek showed that they had just about replicated that within a few months. So I think that China is clearly being a fast follower here. The US will demonstrate something, and Chinese companies and other Chinese organizations work really hard to reproduce it. I think sometimes there can be a little bit of a misconception. I think sometimes people picture the race as a little bit like, for example, cars on a racetrack, where if you're in the car that's ahead and the other car is right next to you, pulling up beside you, then it could obviously just overtake you at any moment. I think other metaphors are a little bit more accurate for this sort of leader-follower dynamic when it comes to technology and innovation, where the first actor, the leader, is really doing the hard work of exploring, trying things, and looking at different possibilities.
SPENCER: It's drilling a tunnel, kind of, and it's a lot easier to follow.
HELEN: Maybe hiking through a snowy forest or something, where the first person is trying to look for the clearest path. They're tramping down the snow a little bit so that the people behind them can walk a little more easily. I don't want to exaggerate this. The Chinese companies are still working really hard. They have really smart people, really smart researchers. They're doing really impressive work. But I think people intuitively think of the cars on the racetrack, which means that any moment of hesitation by the US would immediately lead to China zipping ahead. I think that's not really the dynamic we see here. If we're talking about the frontier models, that's a place where I think people sometimes have not quite the right picture in their head. If we're talking more about something like military competition or about using AI to benefit the economy more broadly, then I think the narrative tends to be mistaken because the relevant questions are just so much more about how the technology is being used in practice, like we've already talked about.
SPENCER: In what areas do you see the US doing better on deployment, where we seem way ahead in terms of just putting AI into practice?
HELEN: There's this guy called Jeff Ding who has written a whole book about this question of what he calls "diffusion." His theory is basically that the US is better at technological diffusion in general than China. He has some really interesting examples, like the US just has more mature enterprise software, for example. More companies in the US have good software ecosystems internally, which make it much easier to add AI into something. If you're already using a good software stack, then adding in AI tools on top of that is going to be much easier than if you have some sort of kluge of different things that you're using, analog, some stuff on paper, and some stuff digital. Likewise, the US has better access to cloud services and more widespread access to cloud services. I think he especially looks at government use of cloud services, for example, where that's kind of the underlying infrastructure that you're going to need to make use of AI in practice. I don't know that I've actually seen in real life domains that jump to mind where you can clearly see a huge difference between the US and China. I think the military domain is one that'll be really interesting to watch over time because that would be one where Jeff Ding's theory would suggest that the US will have an easier time actually putting these systems into practice. But I don't know that we've quite seen that yet.
SPENCER: Have you followed the development of AI in Ukraine, for putting on drones, in terms of battlefield drones?
HELEN: Yeah, a little bit, not super closely.
SPENCER: Some people argue that that's going to be a new frontier in warfare. You imagine going from extremely expensive bombers or something, and instead having 100,000 autonomous drones, and it's just a whole new way of fighting. Do you have a perspective on whether that's actually going to be a new frontier, or if it's just going to be a minor detail?
HELEN: I don't know exactly how it'll shake out, but it's definitely disrupting a lot of traditional ways of thinking about warfare and about what assets you want to have. The Department of Defense in the US is really set up around building these big, they use the word exquisite, very large, expensive, technically intricate pieces of equipment, whether it's an aircraft carrier or a main battle tank or an F-35 fighter jet. There's definitely a big paradigm shift going on in military thinking right now of, "Okay, if you can have a small number of very cheap drones come in and actually cause damage to these very, very expensive, very large pieces of equipment, then that's a problem, that's really going to create an imbalance and make it much less valuable to spend all that money and all that R&D effort on developing those big things." So I think it's a little bit of an open question how it will play out. There's a lot of work going on on counter-drone measures right now. How do you take out the drones? Can you jam them? I remember a few years ago there was footage of eagles being trained to take out drones. I don't think that's going to be the long-term solution, but yeah, definitely a lot of things are being rethought right now in military circles, and I think some countries are able to adapt to that more quickly than others. Ukraine has done really well at just totally rethinking a whole bunch of their battlefield concepts on the fly. It'll be interesting to see. Hopefully we won't have a major hot war between the US military and other countries for a long time, but we'll start to see some of these dynamics emerge over the course of the next few decades, I'm sure.
[promo]
SPENCER: The idea of a fully autonomous AI with weapons is a frightening concept. It's very science fiction and gives a lot of people the creeps, very understandably. There's this initiative to ban them, calling them slaughter bots. Do you think there's something in particular about autonomous weaponry that we should take seriously and say, maybe we should just outright ban it and try to make it like a Geneva Convention type of thing where nobody uses them?
HELEN: I think it's tough. I share the underlying concern about, "Wow. Do we really want to give the AI weapons?" For sure, I'm on board with that. The thing is, it gets tricky when you start to look at what specifically you would want to ban, how you would enforce that, and what that would actually lead to and look like. For example, the slaughter bots campaign was about autonomous weapons, with the canonical example being drones that are doing some kind of facial recognition or maybe they're choosing their own targets on some other basis. There was this whole process at the UN. Did you hear about the GGE, the Group of Governmental Experts at the UN, and the treaty they were trying to negotiate?
SPENCER: No.
HELEN: So there's this long process, I want to say from 2014 to 2023 or 2024, where the UN basically tried to negotiate a treaty to ban "autonomous weapons." First, they had to come up with a definition for what an autonomous weapon is. The line they chose to draw was targeting. Is the system itself, is the AI choosing the target, or is there a human involved in choosing the target? The treaty process never really got very far for a whole bunch of reasons. But it had a real problem with that definition. If you go ahead and ban AI systems from choosing their own targets, you could very easily imagine a system that is sending back suggested targets to humans at base, and then the human is hitting OK. If the human operators get used to the AI being really good at selecting targets, they're just going to start mashing that OK button. That's going to be essentially the same as having autonomous weapons. It's also going to be quite difficult to verify whether the human is really pressing the OK button or how much judgment they're using there. I think that is not obviously the right place to draw the line. I think it's also not obviously the right place to draw the line because you could have really concerning other uses of AI, things like command and control decisions or battlefield awareness, where if the AI gets something wrong, it could be really damaging, but where they're not making a targeting decision. I think we need to be a little bit more subtle and targeted about how we think about the appropriate use of AI in the military, rather than just going purely for this autonomous weapons definition. One last thought is that sometimes these concerns about using AI on the battlefield can be best expressed with existing laws of war. Laws of armed conflict, international humanitarian law, these are all terms for basically the same thing, which is the set of international laws that say, "You can't kill civilians. You have to distinguish between civilians and combatants. You can't kill indiscriminately. You have to choose your targets proportionally," things like that. Often, the things that we're imagining about what would be bad about an AI system on the battlefield are actually contravening international law. The solution there is maybe not no AI. The solution is to really take international law seriously and comply with it.
SPENCER: What about people who are worried about safety from really advanced AI, where AI might essentially not act in line with its creator's wishes? Do you think there's any special danger from weaponizing AI, or do you think at that point it doesn't really matter whether you give it guns or not?
HELEN: I definitely think you're in a worse situation if you have an AI like that, and it has access to guns, artillery, and nukes. I think it makes sense to have a high bar for the reliability and interpretability of AI systems that we build into military systems. Fortunately, at least in the US, I think they do have a pretty high bar for the kinds of tests and the reliability assurance that they look for in their computer systems. The starting point there is fairly good. But if you have an extremely capable AI that is going against what its creators want, then you're kind of in trouble, whichever way you look at it.
SPENCER: Just to be clear for listeners, the concern there is not so much that the AI one day wakes up and says, "I want to kill all humans." It's that a very slightly misspecified goal with a very powerful AI could lead the AI to do things that you didn't intend. To give an example I like: imagine someone takes a really advanced AI and says, "I want you to make as much money as possible." There are a lot of ways to make money, and there are probably a bunch of ways to make money that you really wouldn't want it to use. Attempts to get it to stop once it started might prevent it from making money, but you told it to make as much money as possible. Now it has a weird incentive to hide its behavior from you, or maybe even try to prevent you from turning it off, because its goal is to make as much money as possible. It creates all of these very complex dynamics that can get scary very quickly.
HELEN: Yeah, and I think there are versions of that that involve superintelligent systems that are way beyond human level. I think there are also versions that are more imaginable or closer to the technology we have today. A version of that that gets discussed in military circles is the idea of AI systems affecting crisis escalation dynamics in military settings. There are really difficult things that come up if you're in some kind of relatively small-scale conflict, or there's some kind of tense moment between different countries. How do you handle that? What is going to be perceived as escalatory and not escalatory is very mushy and very subjective. There is certainly concern that if you had AI systems that were somehow dictating what is happening there, you might end up with what gets called inadvertent escalation. The conflict escalates without either side actually wanting to escalate it. That's another example, or a related example, of a system that is doing almost the right thing, but not quite, and how that could really get out of control in a pretty concerning way.
SPENCER: A lot of talk is about competition with China. But do you think there are realistic ways that there could be cooperation with China in terms of AI, for example, making rules that everyone follows to try to keep AI safe?
HELEN: I think it's pretty rough right now, honestly, with the state of the US-China relationship, which is very poor. We're recording just in the first few weeks of the Trump administration. It's not totally clear to me what the Trump-China relationship is likely to look like, or how that relationship will evolve over the course of this presidency, but certainly the last seven or eight years have not been a great time in US-China relations, and I think are not providing good fertile ground for cooperation to grow out of. I tend to be pretty pessimistic about anything that is really binding, like a treaty or formal agreement of "neither of us are going to do this." There is something that we both really want to do, like we both really want to build this certain kind of AI system, but we're both going to promise that we won't. I think it's going to be really hard to get some kind of agreement like that. I do think there are other kinds of cooperation. Cooperation is not quite the right word, but I think there is a lot of value, for example, in person-to-person technical dialogues. Having US and Western AI experts, including AI safety experts, talking to Chinese AI experts and AI safety experts, and just really kind of exchanging views on what they are seeing, how they are thinking about the space, and what kind of things they are concerned about. I think there's also value in sharing information about protocols that different companies are following or that the government is following. The US, for example, has this political declaration on the responsible use of AI in the military that they put out, and then they got more than 40 other countries to sign onto. I think that kind of thing, just like unilateral declarations of "here's what we're doing, here's how we're thinking about it," can actually be pretty helpful. I tend to be more optimistic about those softer, less ambitious, less binding things, where I think anything that's actually going to involve real trust is going to be super tough right now.
SPENCER: To what extent do people in the government, in the US, feel concern about AI, where they actually think, "Oh, wait, this could actually have really negative societal consequences and something we should really take seriously?"
HELEN: It depends enormously on who you're talking about. The government, depending on exactly who you count, is maybe many millions of people; senior policymakers are still many, many hundreds of people, or even thousands.
SPENCER: Let's say Senate and Congress.
HELEN: In Congress, still, you've got several hundred people with tons of different views. I think there are some people who really are worried about that, and people have a wide range of different kinds of worries. Worries about deepfakes and things like that, worries about privacy, are pretty widespread among members of Congress. I think worries about superintelligent, misaligned systems that take over from humanity are more niche, but there are some relatively senior policymakers who take that seriously. In the Biden administration, I think there were some pretty senior people who were thinking about those threat models. In the Trump administration, it's still a bit up in the air. Elon Musk has expressed concern about that kind of thing in the past, but now seems to be hard at work on other things. Others who are advising Trump on AI issues haven't really seemed to express that perspective. So I don't know. It really depends, I think, on who you're talking about, and in many cases, I think it's a bit up in the air right now.
SPENCER: Do you think that there's political will to try to regulate AI in order to protect society from some of its effects?
HELEN: No [laughs]. I think there was a window in the aftermath of ChatGPT coming out where quite a lot of people were shocked, surprised, and concerned. In 2023, there was the most real discussion that I've seen and the most real seeming momentum towards, "Oh, maybe we really do need some kind of regulation here." I think that has really dissipated. Certainly after this election, in the US context, it has really dissipated. It's a different story in the EU. They are implementing their AI Act, which is very big and sweeping, and may turn out to be too big, sweeping, and heavy-handed. In the UK, there was some talk about having a bill specifically on AI. I haven't seen any actual movement on that yet, but there did seem to be interest. So it depends where you're talking, and even in the US, I think there are things happening at the state level that definitely couldn't happen at the federal level. So I guess the answer is a little more mixed than my immediate no, but I think at the US federal level, the answer is no.
SPENCER: I haven't looked into this too deeply, but just from a cursory glance, it looks like surveys in the US suggest that citizens actually have quite a bit of concern about AI. Do you think that's true?
HELEN: Yeah, I've seen that in surveys too, for sure.
SPENCER: And do you think that that realistically could lead to policies where people are like, "Well, the people want AI regulation, or they at least feel safer from AI, and you can use that to help with your election campaign?"
HELEN: Yeah, I think it could. I think especially at the state level, we might see some of that. It's going to be interesting to watch what happens. There's a whole bunch of different AI-related bills getting proposed in different states, so it will be interesting to see where those go and if any get across the finish line. I think it's appropriate to some extent. I do think it's a little tricky. I think sometimes people who are concerned about AI risks really point to that public polling as sort of, "Well, obviously this is a good reason to do things." But I think the public also tends to be really conservative, small-c conservative, with technology and innovation, even in ways that I really wouldn't agree with. So for something like autonomous vehicles, for example, I would much rather that we just look carefully at the safety data and put the cars on the road if they're going to reduce crashes, rather than really deferring to public opinion, because I think public opinion is very likely to be freaked out about autonomous vehicles for much longer than actually makes sense, which could cost lives if we're not deploying AVs that can drive more safely. So, yeah, I guess to try and tie that all back together, I do think we will see public opinion being used to try and motivate AI regulation. I think that is probably good to an extent, but I also think it could be taken too far given that the public is often kind of reflexively against new technologies as well, because they're new and unfamiliar and potentially seem scarier than they are.
SPENCER: Do you think it's naive to assume that just because public opinion surveys might show people are in support of something that that can really be capitalized on by politicians by pushing that agenda? Because you might think, "Oh, if people support it, they could get votes by pushing on it, but maybe there are a lot of things they say they care about, or maybe there are a lot of things they would prioritize higher, or just that are more politically salient."
HELEN: I think the political salience is a huge thing. It's like, what will people say in a survey? And then what are they actually thinking about when they're choosing their elected representative? So, yeah, for sure. I think if you look at the surveys I've seen, the number of people who put advances in technology or AI as one of their top issues politically is maybe 1%, tops. So for most people, this is really not an issue that they are prioritizing or thinking much about when they're choosing who to vote for.
SPENCER: Right. So when they reflect on it, they might feel a certain way, but that doesn't mean they're reflecting on it very often, or that it's coming up day to day for them.
HELEN: That's right. And that might be totally rational. If you ask me what my take is on the design of playgrounds in my area or something, I might have a take, but it's not going to be the key thing I'm thinking about when I vote.
SPENCER: Going back to China for a second, there's obviously a lot of talk about Taiwan and semiconductors. Could you maybe just explain that situation a little bit, just to give the listeners a better understanding of what's going on there?
HELEN: Sure. So we're in this kind of unusual situation where there's this type of hardware, semiconductors, computer chips, that is absolutely foundational to civilization, essentially. It's in everything — chips are in, obviously, your computer and your phone, but also your car and your toaster and your fridge and everything. The unusual part of the situation is that for the most advanced chips, the really cutting-edge, fastest chips with the most transistors packed onto a single chip, those are almost entirely manufactured in Taiwan by this one company called TSMC, Taiwan Semiconductor Manufacturing Company. Those chips, the most advanced chips, are especially relevant for the most advanced AI systems as well. So actually, when we set up CSET in 2019, this was a topic that was not really in many policy conversations. It was not a big part of the discussion around AI, around technology, but I want to claim some credit for CSET for actually helping to put this onto the table: "Look, if you're interested in AI, if you're interested in innovation, technology, technological innovation more generally, then the underlying chips are a huge deal." The structure of that supply chain is really weird right now, where almost all the best chips are manufactured in Taiwan. There are also these other parts of the supply chain that are very concentrated as well. There's one company in the Netherlands, this one company in Germany, which is not the way that things generally look for highly strategic technologies. Then, of course, Taiwan is a very complicated place from a strategic perspective, being this extremely contested island, in terms of China claiming that it's part of China, Taiwan claiming otherwise, and the US having this confusing, deliberately ambiguous policy. So it all gets pretty messy, pretty quickly.
SPENCER: How does this relate to Nvidia? Because I think a lot of people are confused about that; they might think, "Oh, Nvidia, can't people just buy Nvidia chips?"
HELEN: Yeah, yeah. So the distinction is between which companies design the chips and which companies manufacture the chips. The company I just named, TSMC, manufactures, as far as I know, all of Nvidia's most advanced chips. So Nvidia is a really important part of the picture as well, but Nvidia purely does the design piece. If they don't have TSMC to actually build the chips for them, then they're stuck.
SPENCER: How on earth did the world get to a place where there's such a narrow supply of this thing that people care about so much?
HELEN: My understanding of it, and there are people at CSET and outside who are much deeper experts on this, is that there are a couple of dynamics here. Essentially, tacit knowledge is one, and the other is how capital-intensive this R&D is. Maybe taking them in reverse order, advancing the frontier of building the most advanced chips basically means you have to be a company that is investing enormous amounts of money in R&D every single year. You have to be earning tons of money from your chip sales in order to reinvest that money in experimenting and building the next, more advanced systems. There's a natural concentrating dynamic where, if you have a hundred companies trying to manufacture chips, the best one, the one that's making the most profit, is going to be able to invest the most money in advancing their chips faster, and then they're going to keep being the better company that's able to invest more and more and get further and further ahead. My understanding is that that's basically what's happened with TSMC over the last 10 or 20 years. The tacit knowledge piece is that this is not just a matter of physics, where if you want to learn something about physics, there's lots of stuff written down in books, or you can read textbooks and papers, and that's sort of all you need to know. It's much more like the natural sciences, much more like biology, where, in addition to all of the book learning, you really need that hands-on experience of how do you actually, in the case of semiconductors, get the machines to work? How do you get your yields high enough? There are huge differences between different semiconductor fabs, which is what they call the factories, in terms of how many usable chips they get for a fixed number of inputs. I think my understanding is that it's that combination of needing to reinvest money in R&D, and that being a sort of self-reinforcing loop, plus really needing to have that tacit knowledge of the best engineers and people on the factory floor teaching each other, that leads to this kind of reinforcing dynamic of everything concentrating in that one company.
SPENCER: I know someone who's worked in the design of chips for a long time, and he was telling me about how they have to learn so much new stuff every few years that your knowledge becomes outdated so incredibly quickly. So it's this constant race to know the newest thing so that you can make the next generation of chip that keeps up with the curve.
HELEN: It's been really interesting to watch. The US government has been trying really hard to get more of that capacity to build the most advanced chips into the US. So there's the CHIPS Act, which was passed a couple of years ago, which is basically giving huge subsidies to companies that build advanced chips in the US, and TSMC actually has a plant now in Arizona where it's trying to do this. They've imported a ton of workers from Taiwan because it was much easier for them to just bring over a bunch of their best people than to try to train up the US workforce from scratch. They're trying to do both. They're trying to get the Taiwanese transplants to teach their American counterparts, but it definitely wasn't something where they could just train up Americans and have that be enough.
SPENCER: As I imagine you've heard, Sam Altman has been talking about raising many billions of dollars to build chip manufacturing in the US. Do you think that that's something that can actually happen by throwing money at the problem, or do you think that it's so bottlenecked on talent that money isn't even the major factor here?
HELEN: I'm not sure; it'll be interesting to see. I think money is not the only factor — government permitting, construction, and energy are also major considerations. On these other factors, I think there is increasing interest and buy-in from inside the government to try to move through permitting. If you're building a new gigantic data center, then you need all the permits to actually build it. You need to get the energy. You need to get the water. They're incredibly energy and water intensive. My understanding of the Stargate plan that Sam Altman and a couple of others have announced is that they're planning to buy — I don't think they're planning to manufacture their own chips, as far as I know. I could be misremembering that — but they're planning to construct these very large data centers. I think they're planning to just buy from Nvidia and TSMC and others.
SPENCER: Okay, I didn't realize that.
HELEN: I think there have been different versions of the plan, but I think the one that they've announced most recently is mostly about building the data centers and not about manufacturing the chips.
SPENCER: Another thing that's been in the news is regulation about what chips can be exported to China, and there have been different takes on whether that's been successful at giving the US an edge. What's your thought on that?
HELEN: It's complicated, and I have mixed views, mixed feelings, I would say. One piece of it, actually, that I think makes a ton of sense and has been fairly successful is looking at what's called semiconductor manufacturing equipment, or SME. These are the giant machines that you put into the semiconductor fabs and that are used to make the chips. CSET did a lot of work in 2019 and 2020 basically pointing out that this is a key part of the supply chain that is currently based outside China, where certain pieces of equipment are only being manufactured by a small number of firms, and controlling those pieces of equipment forces China to continue to rely on Western suppliers, which is good. So I think that part is relatively straightforward. It sounds kind of technically complicated, but I think the logic is relatively straightforward. It's helpful to have China not be able to build up its own domestic supply chain, and so you prevent them from importing those machines. I think the questions about controlling the chips themselves are more complicated and get pretty wonky pretty quickly, in terms of what criteria you are using to choose which chips, and how you are thinking about who gets to buy them, and for what purposes, and whether you include cloud computing or not, for example. So overall, I don't know if we'll look back 10 years from now and think that this was a success. But that being said, I think now that the US has made this a key part of its strategy for competing with China, it makes sense to do it right and to be smart about it. The first big set of controls were put in place in October 2022, then a year later, they amended them to fix some loopholes and change some of the criteria. I think those changes were productive and were in the right direction. Then in the last few months of the Biden administration, they made some additional changes that were trying to shore it up and make it more systematic. So now that the US government is on this path of controlling chips, I think it makes sense to stay the course and try to do it the right way, and not to backtrack, even if my overall feeling is a little bit mixed about whether it'll turn out to be right or not.
SPENCER: Recently, when DeepSeek came out and they were able, with much less training expenditure, to produce models that were nearly competitive with US models, a bunch of people freaked out and started thinking, "Well, maybe we thought that so much compute is needed to train these advanced AIs, but maybe that's not true." I've heard different debates back and forth about how cheap it really was to train these models in China. Maybe it was actually more expensive than they let on, or than it seemed at first, but I'm wondering, do you see shifting dynamics there in terms of maybe it's a lot cheaper to train these things? I think there are two perspectives on that. One is, maybe that will reduce demand for chips because you can get the same level of model for cheaper. But another view is, "No, actually, maybe that could even increase demand for chips because maybe we'll find even more things to do with them if we can get models trained to a certain level that much more easily."
HELEN: Yeah, I think there are a few different questions in here that sometimes all get mushed together, but they're actually kind of different. So that last question you asked, I think, is pretty interesting. I would definitely go for the second option you gave: if you can use the chips more efficiently, that is, you can get more out of them, then we're going to want, in total, more chips. I think that's pretty likely. That's definitely what we've seen with things like electricity, for example. When we can produce electricity more efficiently from a certain source, then we want more of that, whether that's solar panels or oil or whatever it might be, because electricity is so useful that if we can get more bang for our buck, then total demand goes up. That's one piece of it. I think, in terms of the US-China competition dynamics, the story of DeepSeek building something that was almost as good or as good as the best US models for cheaper has a few different implications. One implication is that it's going to be very difficult to prevent advanced AI capabilities from diffusing, from spreading out, from proliferating to different actors, because there is this repeated pattern where the first time a company builds some kind of complicated AI system or advanced AI system, it's really hard to do. It takes a lot of computing power. It takes a lot of expertise. Over time, it gets simpler and simpler. You need less computing power. You need less expertise. Things that were absolute cutting edge in 2012, for example, which is when the boom in deep learning took off, are now very easy to recreate. Likewise, building a system like ChatGPT in 2022 was the real cutting edge and was really difficult, but now it's much easier and much more accessible to a larger number of people using less computing power. I think that is one important takeaway from the DeepSeek releases: that is going to continue to be true. But I don't think that means that there's no value in making those big investment bets where you need huge amounts of computing power and expertise, because I think that will continue to be where the frontier is being pushed. It can be easy to confuse those two things, but I think they're quite separate. On one hand, there's how you push the frontier and make new advances, which I think is going to keep being very compute intensive and need tons of the best researchers; on the other, there's how easily you can reproduce that over time, which is going to keep getting easier and easier over the course of months and years after those advances are made.
SPENCER: I know your focus is more on the policy side rather than the technical side, but one thing that I've been wondering about is we saw this result with DeepSeek. Another interesting result that came out very recently suggested that it takes a very small number of training examples to get these models to do reasoning. For example, there was a paper that showed with just a thousand very well-curated examples, they were able to get a model to engage in reasoning where the more time it spent thinking about the thing, the better its results would be. They were fairly competitive with other models that used way more data. This is starting to suggest to me that we actually had models that were sort of, in a sense, much smarter than we realized. A lot of the work is actually just getting them to operate at their full capacity, rather than having to make them much smarter. You have to figure out how to get the model to behave smart, but it already has the capability to be smart.
HELEN: Yeah, I think there's part of that that is very true, and another part that I'm less sure about. The part I'm less sure about is just with all of these models and all of these releases, we always have to be careful with interpreting that. When they say they used a thousand examples and it was as good, it's like, what tests did they use, what benchmarks did they use to say that it was as good? Does it actually perform as well when people get their hands on the system? Over and over again, what we've seen over the last couple of years is people releasing new models and saying, "Oh, this is as good as GPT-4. This is as good as Llama-3," or whatever it might be. Then when you actually go in and use the model, it's like, "Well, it's not really as good. It did as well on this one test, but actually it's not as good." I'm still reserving judgment on that particular paper until I really see that it holds up. That being said, I think the piece that is true is there is so much potential value in the AI systems that we already have that we haven't figured out how to squeeze out yet. In the history of technology, it's often been the case that figuring out how to actually get productive use out of something new can take decades. It really takes a long time for people to experiment and rebuild all their existing workflows and things like that. So, yeah, I definitely think that if, for example, progress in the cutting-edge AI research were to really stall out right now, we would already have decades' worth of implementation and getting the juice out that could still happen with the level of AI that we have today. That could result in pretty significant shifts in how we work and in what the economy looks like, even without further advances. So that part, I definitely do agree with.
SPENCER: I agree with that, but actually it's slightly different from the point I was attempting to make. I was attempting to make the point that you have these base models that have been trained on huge swaths of the internet and books to predict the next token. There's a question of how smart these are fundamentally, where a lot of the additional work to get them to behave really well is about getting them to use the right subsets of the network or getting them to live up to the full potential of the intelligence inside the system, versus, "Oh, you have to make much smarter models by training on a hundred times more data with a hundred times more computation," or something like that. These results have suggested to me that there's a lot of latent intelligence in these models that we've already trained that we didn't know how to get out. By giving it the right thousand training examples, you can suddenly unlock a bunch more intelligence that we didn't know how to unlock before.
HELEN: Yeah, I guess maybe that's a different thing. I sort of think of it as being similar to what I was talking about, but maybe it's different. I think it's something we've already seen a few times over the last few years. One big example that comes to mind, which you're probably familiar with, is chain-of-thought reasoning: you can just prompt a model to think step by step, and it turns out that suddenly it'll do much better, which is kind of a funny result. I agree with you that there is this dynamic where we're still figuring out what these models can even do.
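[Editor's note: a minimal sketch of the chain-of-thought effect described above, assuming access to the OpenAI Python client; the model name and question are illustrative, not from the conversation.]

# Editor's sketch (hypothetical): the same question asked with and without a
# "think step by step" instruction.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

QUESTION = ("A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
            "than the ball. How much does the ball cost?")

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct question: models are more likely to blurt out the intuitive
# but wrong answer ("$0.10").
print(ask(QUESTION))

# Chain-of-thought prompt: asking for step-by-step reasoning tends to
# improve accuracy on multi-step problems.
print(ask(QUESTION + "\nLet's think step by step, then give the final answer."))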
[promo]
SPENCER: So shifting topics entirely, something that you have a pretty unique experience with is working with horses. Do you want to tell us a little bit about that and some of the things that you've learned working with horses?
HELEN: Sure. My friends sometimes laugh at me for bringing horse analogies into serious situations with humans, but as a kid, I rode horses. I never owned my own horse; my parents made the very smart deal with me that if I stopped asking for my own horse, they would take me to riding lessons. What that meant was that for a long time, I thought a lot about horses, I spent a lot of time with horses, and I read a lot about working with horses, and I think there are pretty interesting things you can learn from training horses that apply in human situations. It's been especially relevant since I've become a parent; I have a three-month-old and a two-and-a-half-year-old, but I think it applies with adults as well. Before I dive in, I should say that I've come to a place where the styles of horse training I most like are pretty unusual. They're pretty alternative styles; this isn't necessarily what's typical or mainstream. But one place to start would just be how far you can get by thinking of the horse as a real being with a real perspective of their own. I think this works really well for children as well. In contrast to thinking that the horse or the child is being naughty, disrespectful, or lazy, try to really get inside their head and think about what's going on for them and why they're approaching the situation this way. For a horse, that might mean really understanding that they're fearful in a certain moment, or confused, rather than projecting some other motivation.
SPENCER: So the horse is not doing what you wanted it to do, and you could get frustrated and annoyed at the horse, but if you shift perspectives, you're like, "Okay, does the horse not understand what I want? Is the horse afraid for some reason? Is the horse exhausted?"
HELEN: Right. Exactly. This also comes up both with mainstream horse training and maybe mainstream approaches to parenting, like the idea of not rewarding bad behavior. For a horse, maybe it spooks at something because it gets scared. For a kid, maybe your child is demanding attention. There's this perspective where you say, "Don't reward bad behavior," which is sort of a behaviorist idea in psychology, where you're trying to avoid reinforcing the behavior you don't want, and I think that's a pretty common perspective. But if you're really coming at it from the idea of what's going on for them, then it can look quite different, and maybe looks more like trying to prioritize your relationship with them. Instead of punishing the horse for being scared, you try to be more attuned to how it's feeling and help it understand what's going on better. Or with the kid, instead of trying to freeze them out if they're trying to get your attention, maybe recognize that's a real need they have and see how you can restructure the situation so that they feel less need to do something that's annoying. I've found that a pretty helpful reframe.
SPENCER: It seems to me that both of these perspectives are very valuable simultaneously. It really is true that if you reward bad behavior, it tends to produce more of it. And yet, it also is really true that if you pay close attention and look at the underlying need of why the bad behavior is happening, you might be able to shift things to a new frame where you can resolve the issue.
HELEN: I think it depends a bit on the situation. In the horse case, people will often want to keep exposing the horse to something that it finds really stressful in order to not reward it by removing the stressful thing. I think that's just the wrong approach. From what I've seen, you can actually get the horse to behave better if you remove the stressful or scary thing. So I think sometimes they are in direct contradiction, but I agree with you that they can also both be valid. For example, I have a toddler and a baby, and if the toddler is doing something that we don't want her to do, like tipping out her drink because she wants attention, I don't want to punish her, but I do want to notice that she is feeling like she's not getting enough attention. So how do I change the situation so that she doesn't need to do something like that to get my attention in the future? Maybe that is bringing in both ideas a little.
SPENCER: I think the example of the horse being afraid is a good case for adopting the horse's perspective. But I've seen lots of situations with pets where people reward a behavior they don't want. They don't do it on purpose; they do it indirectly. A classic example is when the cat jumps on the table. You don't want the cat to be on the table, and then when it jumps down, you give it a treat to reward it for getting off the table. But you've actually rewarded the whole chain of behavior. Now it learns that if it jumps on the table and then jumps off, it will get a treat. I've also seen this with pets that like to chomp on your hand, where people react in a way that's fun for the pet. If you just remove your hand and withdraw from the situation, the pet quickly learns not to do it, because it doesn't get that fun reward of playing by chomping on your hand.
HELEN: I agree that there are definitely ways that people can sometimes reinforce things without realizing it. What I was describing is more the punitive version, where people may behave a little more harshly than would actually be most productive because they're trying not to reward something.
SPENCER: I guess what I would say, and I'm curious if you agree or not, is that there is a real fact that beings have needs. If their needs are not met, they'll get stressed out, they'll be unhappy, and they might behave in obstinate ways, so it's important to figure out their needs and meet them. At the same time, I think it's a real fact about human and animal psychology that operant conditioning is a powerful force. We eat something sugary and it tastes good, so we eat more sugary things, or we touch something and it shocks us, and then we don't want to touch it anymore. We really react to rewards and punishments, often very subconsciously, and our behavior is shaped by them.
HELEN: I think that's right. Maybe the thing that I would add here is around relationship building and also the nervous system effects, which is especially relevant if we're talking about something with fear. I think operant conditioning is certainly real, but if you're talking about your relationship with a child, a horse, or another adult, sometimes treating it too much as a conditioning situation can lose an opportunity to focus on your connection and your relationship with them. This can allow you to more productively shape their behavior, cooperate with them, or do things together in the future. Maybe that's especially important in situations where the so-called bad behavior is coming from their nervous system being activated, their sympathetic nervous system, fight or flight. In those situations, it may be really valuable to try and connect with them, be attuned to them, and get them into a calmer state. That can actually be more productive than purely focusing on the reward and punishment conditioning frame. But it totally depends on the situation.
SPENCER: A book that I found very interesting is Karen Pryor's famous Don't Shoot the Dog, about operant conditioning and animal training. As far as I recall, she takes the perspective that most training should involve rewarding good behavior rather than punishing bad behavior. I don't know that she says you should absolutely never punish, but mainly you should try to reward. It reminds me of something you just said: if you think about the relationship between the trainee and the trainer, rewards create a desire in the trainee to be part of the training process, whereas punishment is not just punishing behavior; it's punishing the relationship. It creates an atmosphere of unpleasantness and fear. It's like, "Why would I want to train with you if you're going to punish me?" I'm curious how you think about that, in terms of whether you feel better about operant conditioning when it's a primarily reward-based program.
HELEN: I think it has a place in the toolbox. I think the operant conditioning frame can miss something, especially with horses, and that's timing. Timing matters with humans as well, but we notice it a little less because language can mask its importance; we think we can just discuss something afterwards. For example, my toddler throws something on the floor and it breaks, and I tell her she shouldn't have done that. But when you don't have language, as with horses, you learn to block or prevent something you don't want in the moment. That has turned out to be really helpful for parenting as well. I can't really reward my toddler for not throwing something on the floor; that doesn't seem quite right. Instead, thinking about how to find the exact right moment to block or prevent and redirect can be really helpful. That relates to another thing I've really internalized from working with horses, which is the idea of your energy or your "aura" being a real thing. There's a version of this that's obviously just kind of woo and not real, but I think there's a more holistic way of thinking about body language. This comes out with things like people saying a horse can tell if you're afraid. Sometimes people think, "Oh, that's because if you're sitting on the horse, they can feel you tense up, or maybe they can smell it." But I think there's a holistic way that something like fear, calmness, or assertiveness can come out in your overall posture and demeanor in a way that really matters. It's been interesting to play with this both with horses, who are very body language-focused creatures, and with people, who I think are more body language-focused than they realize.
SPENCER: And I imagine kids are really soaking up that information about how their parents feel.
HELEN: Yeah, for sure, and kids especially. I think kids, toddlers, and teenagers are acutely aware of, for example, what your aura says about what you want them to do. It's not just whether you're tense or something; it's that you really want them to eat that broccoli, and you're really hoping that they leave the piece of cake on their plate alone. Toddlers take great joy in noticing that kind of thing and then doing the opposite and seeing how you react. To me, a really key component, if you're taking seriously the fact that you have some kind of aura or that your energy matters, is being able to come to a place of calm acceptance, a very mindful kind of thing, of being just in the present moment, without an agenda, without a particular plan, or at least being able to set your plan aside for a little while and just be at peace with the present as a starting point. Otherwise, either the horse or the toddler gets affected by what your agenda is.
SPENCER: One thing that sometimes strikes me when I'm around little kids is how much they care about attention. The classic example would be, "Mommy, mommy. Watch me while I do XYZ." They really care about that. I imagine that's another thing they're tracking: what you're paying attention to.
HELEN: Totally, yeah. I think that's definitely true for toddlers, and I think about horses as well. That comes in with a huge thing that people often neglect when they're working with or training horses: does the horse actually feel safe? Again, coming back to their internal perspective: they're a prey animal, so they're constantly scanning for threats. It can be really powerful if you can convey to a particular horse that you are not just some human-shaped object they have to figure out how to deal with, but that you're, in fact, taking care of them and paying attention to the environment the way their herd mates would, the way other horses around them would, and that you're going to notice if something threatening is happening. That can create a huge amount of stability and security for a horse in a way that they usually don't experience with humans. I think most humans are like, "Well, obviously that siren blaring in the distance or that plastic bag blowing across the arena is irrelevant; I don't have to pay any attention to those." But if the horse feels like they have to do all of their own threat tracking, then they're much more tense. So that's another way that they're paying a lot of attention to where your attention is.
SPENCER: That's interesting. So how do you demonstrate to a horse that you're tracking threats so it can relax a little?
HELEN: Yeah, I think it can just be as simple as looking at things. If something makes a noise, look where the noise came from. I think it can also be noticing when the horse gets tense and showing them that you realize they are distracted or that you realize they are scared, because that also shows them, in the same way that another horse in their herd would, you're tracking their energy and their stress levels. So I think those are two good ways. There's a cool trainer who posts fun stuff on YouTube where he's really blending human psychology and horse psychology. He also talks about getting a horse to do mindfulness meditation by asking them to repeatedly focus on sort of the here and now. That's a variant of this, where you're also trying to direct the horse's attention in a way that will calm them down.
SPENCER: Oh, interesting. How do you think horses feel about being ridden?
HELEN: I tend to think that most horses are having a pretty bad time a lot of the time when they're around humans, unfortunately.
SPENCER: Oh, really, just in general, they get stressed out by humans?
HELEN: This is an unpopular opinion among people who ride horses, but I tend to think a lot of the typical approaches to training and working with horses end up leaving them pretty anxious, stressed, and traumatized. I don't think being ridden, per se, is inherently a bad thing for them. I think it's more about being taken away from their herd mates who are looking out for them, and instead being put in situations that they don't understand and being exposed to things that they're worried might kill them, because they're worried that everything might kill them.
SPENCER: They're a pretty neurotic species.
HELEN: Yeah, I think so. I mean, that's how you stay alive, evolutionarily: be paranoid. I think horses that are trained in a way that they feel comfortable with what's going on, and feel comfortable that the humans around them are looking out for them, can be totally fine with being ridden, or even enjoy it, in the same way that some dogs really enjoy having a job: if they know what the job is and they think they can do it, they can find that really motivating.
SPENCER: I remember you wrote an essay a long time ago about making yourself small and how that can kind of project information. Do you recall what you said about that?
HELEN: Yeah, that was an example of this idea of having kind of an aura or energy. For horses, I think people mostly think of working with them as sitting on them and using your reins and your legs, but a lot of important work you can do with horses is on the ground, where you're standing and interacting with them. Through the different ways you hold yourself and move around the horse, you can project more energy, making yourself and your aura bigger, or you can project less, making more space for them and being less threatening. What you want to do depends on where you're at. But I think especially people who are analytically minded sometimes neglect, in interpersonal situations, how your energy, your posture, and your body language can affect the people around you. There are even small examples: do you ever go into a store and you kind of want to look at something in a part of the store, but there's an attendant standing and somehow occupying space in such a way that you don't want to go near them? To me, that's an example of how people sometimes don't realize that they're inadvertently blocking others from going places, just because they're projecting their energy into that space. You also see it sometimes with two people at a party, where one person really wants to talk to the other, and the other person kind of wants to get away, but they're positioned in a way where you'd have to go around them to get to the door, and somehow their energy is blocking that space. I don't know, it sounds kind of woo-woo when I talk about it in the abstract, but I think if you see it in real life, there really are these sort of force fields that people project around themselves without realizing it.
SPENCER: It reminds me of two things. One is these experiments from the earlier days of psychology where they would do things like have two people go into an elevator but orient their bodies in an unusual direction, not the normal way you would stand, and then see what happens when an unsuspecting person who doesn't know about the experiment gets in. They found surprisingly strong effects on people's behavior: people would often orient themselves in a very unnatural direction just to match what the others in the elevator were doing. It's a totally similar thing. The other thing it reminds me of is the book Impro, which is about improvisation and was written by one of the early originators of a lot of improvisation techniques. He talks about how a lot of improvisation doesn't match social dynamics properly, and he created a bunch of exercises to match it better. Some of those are around the use of space. For example, someone who has higher social status might take up more space, or it might be inappropriate to occupy certain parts of the space around a person with high social status; maybe you want to be in front of them and not behind them, and maybe to the side is okay. It just makes me think we seem to subconsciously track space in ways that we don't even realize.
HELEN: Yeah, totally. In that blog post you mentioned about making yourself small, I tried to think of how big or small you're making yourself as a different axis from how much status you have. Often, people who are higher status will make themselves bigger, and people who are lower status will make themselves smaller to take up less space. But you can play with that a little bit independently of status. For example, if you're someone who is higher status but you make yourself smaller, that can be a good move for mentorship, where you may be the senior person with more experience and more authority. If you can make yourself smaller in this kind of energetic way, so your personal bubble is a little bit smaller, you can make space for those around you and maybe empower other people a little bit more. I read Impro as well and thought it was a really interesting set of reflections on other ways that we react to each other without even realizing it, until you kind of mess with it in an on-stage way.
SPENCER: These subconscious forces that have to do with subtle body language and physical space, we all kind of react to them implicitly. Part of it is genetic, but a lot of it seems like it's probably learned in childhood. They're so powerful and so below the level of consciousness that it's very easy for people to jump into thinking they're magical because it's like, "Oh my gosh. I just saw that person. I had this crazy, strong reaction. They didn't even say a word. How could that possibly be true?"
HELEN: Like love at first sight.
SPENCER: Yeah. Love at first sight is a great example. It seems to me, I tend to think it's all psychological. The brain picks up on way more information than we're aware of, and we've also been trained our whole lives. You see a pattern, and then something happens. You see a pattern, and something happens. We've got this incredible amount of training data that's going into our intuition. It almost feels magical, but I don't think it's actually magical. I certainly see why people think it is.
HELEN: That's a great way to circle back to the AI part of the conversation from the start, and it's something I'm really interested to watch over the next few years. I tend to agree with what you described: we have these very sophisticated processes going on inside our heads that track how people are behaving around us, how they're holding their bodies, how they're modulating their voice, and how they're making eye contact or not. We really take in a ton of information from that, and I'm really interested to see when and to what extent AI systems learn to process that kind of data. I don't think it's impossible, but a lot of the AI systems we have right now are primarily language-based. They can do some vision tasks, but they're very much trained mostly on language. I think that's an interesting thing to watch for the future: what does it take to build an AI system that really strikes you as empathetic and attuned to you in the way that a really great listener or a really supportive person is? It may not turn out to be that hard. I'm not sure. Certainly, it seems like in text chat, the models have turned out to do pretty well at making people feel like they're really listening and being empathetic. I think it'll be interesting to see how that plays out in more of a three-dimensional, physical space as well.
SPENCER: And it seems like face recognition and emotion detection from photographs have gotten very good. It's an interesting question: if you add real-time dynamics, movement, and context, how much harder does that get? You have to add in all those different variables.
HELEN: Yeah. I haven't looked super closely at this, but I suspect the emotion detection stuff relies on pretty exaggerated versions of a lot of this. If you think about the tests they'll do for developmental assessments, they show a person making a really disgusted face, or a really angry face, or a really excited face. That's one thing, but I think that's pretty different from sitting in a meeting with a colleague you've worked with for a while and being able to tell, from their tone of voice or the way they're sitting in their chair, that they don't like something about an idea someone suggests, that they're not convinced by it. Or you're talking to your spouse, and you can tell something is annoying them; they're not quite expressing it, but they're a little bit more irked than usual. That's the kind of thing where I feel we don't have great data sets right now. Not that they'd be impossible to create, but it would probably be a lot of work. I wonder how much of that you could do with the current approach to AI versus needing to dedicate real effort towards it. I could really imagine it going either way, but it'll be interesting to see.
SPENCER: For those listeners who might be interested in AI policy and want to stay on top of what's happening, how do you recommend that they do that?
HELEN: The first thing that comes to mind would be the CSET newsletter, if I'm allowed to self-promote.
SPENCER: We'll put a link to that in the show notes.
HELEN: Great. It's policy.ai, so very easy. Beyond that, there are a whole bunch of interesting Substacks right now. Jack Clark has a great Substack called Import AI, which includes both technical and policy discussions. Miles Brundage, who recently left OpenAI, has some really interesting reflections. Newsletters would be the first thing that comes to mind for me, or you could also look at some websites that focus on these issues. Tech Policy Press, for example, is a really interesting news source that focuses a lot on tech policy.
SPENCER: Thank you so much for coming out. This was a fascinating conversation.
HELEN: Thanks. This was a lot of fun.
[outro]
JOSH: A listener shared the following belief, and I'd like to get your reaction to it. The belief is: "Having children is a necessary part of the human experience."
SPENCER: Well, it depends what you mean by "necessary". I mean, at an individual level, it's not necessary. I mean, you could not do it. Lots of people don't do it. Some people aren't even able to have children. If we think about necessary for the species, then yeah, absolutely. Unless we were able to develop some way of propagating our species without having kids, yeah, it's necessary. We're going to die out otherwise. So I think it's one of these things where that statement, if you take "necessary" one way, it's obvious. If you take "necessary" another way, it's not true. But maybe there's intermediate interpretations as well, like "necessary" meaning if you want to have the whole gamut of human experiences, that having children is a pretty darn important one. And that's probably true. You probably are missing out on some meaningful amount of the human experience if you don't have children. But of course, you don't need to have the full spectrum of human experience. There's probably a lot of other human experiences that many people miss out on. And maybe that's fine. Not everyone's optimizing to have the full gamut of experiences. They might be just looking for a certain type of experience or happy with the range of experiences that they've chosen.