August 2, 2024
What is "apocaloptimism"? Is there a middle ground between apocalypticism and optimism? What are the various camps in the AI safety and ethics debates? What's the difference between "working on AI safety" and "building safe AIs"? Can our social and technological coordination problems be solved only by AI? What is "qualintative" research? What are some social science concepts that can aid in the development of safe and ethical AI? What should we do with things that don't fall neatly into our categories? How might we benefit by shifting our focus from individual intelligence to collective intelligence? What is cognitive diversity? What are "AI Now", "AI Next", and "AI in the Wild"?
Adam Russell is the Director of the AI Division at the University of Southern California's Information Sciences Institute (ISI). Prior to ISI, Adam was the Chief Scientist at the University of Maryland's Applied Research Laboratory for Intelligence and Security, or ARLIS, and was an adjunct professor at the University of Maryland's Department of Psychology. He was the Principal Investigator for standing up the INFER (Integrated Forecasting and Estimates of Risk) forecasting platform. Adam's almost 20-year career in applied research and national security has included serving as a Program Manager at the Intelligence Advanced Research Projects Activity (IARPA), then as a Program Manager at the Defense Advanced Research Projects Agency (DARPA) (where he was known as the DARPAnthropologist) and in May 2022 was appointed as the Acting Deputy Director to help stand up the Advanced Research Projects Agency for Health (ARPA-H). Adam has a BA in cultural anthropology from Duke University and a D.Phil. in social anthropology from Oxford University, where he was a Rhodes Scholar. He has also represented the United States in rugby at the international level, having played for the US national men's rugby team (the Eagles).
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter! I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Adam Russell about "apocaloptimism", debates around AI, and cognitive diversity.
SPENCER: Adam, welcome.
ADAM: Hey, Spencer. Thanks for having me. I'm really excited to be here.
SPENCER: So what on earth is "apocaloptimism"?
ADAM: Yeah, it sounds like it could be a drunk driving test. Right? Ma'am, sir, pronounce the following word.
SPENCER: Why can't I do it? I'm sober. So we'll —
ADAM: Get out of the car, sir. Apocaloptimism is a neologism that, as far as I can tell, I did come up with. So if someone knows better, please call in — so I don't get sued. It was a term I came up with in conversation with a good friend of mine, Jason Matheny, trying to capture what seems like a rational reaction to where we live right now in terms of time (the epoch) — both with respect to the wild complexity of our increasingly interconnected systems and also specifically with AI. Even back in 2020, when we were going through that fun global pandemic, it was obvious that AI was continuing (and was going to continue) to surprise — even people who should know better. I use the term to capture the two camps that I hear out there: the apocalyptics, which we don't have to cover too much because they're loud and increasingly influential, which I think is important. But then also to capture the other camp, which is the optimists — the techno-optimists. Both can paint very vivid and equally plausible futures of where AI is going to take us. To me there's no in-between; you can't feel sort of meh about this. I think both of those camps, while they sort of demonize each other, have a lot that should be taken seriously — and we should take them seriously, as a reminder that we have to get this right.
SPENCER: One way of thinking is AI is going to have a profound impact, whether good or bad. We're kind of caught where we know it's going to be something dramatic, but is it going to be apocalyptic? Or is it going to create this utopia?
ADAM: I think it's also a reminder that you can't sit this one out. Given this apocaloptimistic potential, it's really on everybody, in the sense that we each have some sphere of influence to make sure we veer towards the optimistic side rather than the apocalyptic side. I think part of that is acknowledging the various camps and not dismissing AGI (artificial general intelligence) as a poorly formed, technically impossible, science fiction concept. There is merit to those criticisms. On the other hand, increasingly powerful AIs are being deployed in the world, interacting with and woven into our daily lives in ways we don't even fully appreciate. It's not hard to see how that camp could be right — it could go really wrong. But it doesn't have to be killer robots, either. It could be that we tear ourselves apart. I saw a podcast recently between Yuval Noah Harari and Mustafa Suleyman. Suleyman is the CEO of Inflection AI, and Harari is a well-known historian who wrote the book Sapiens: A Brief History of Humankind. They were having a really intelligent debate that I think is best summarized as apocaloptimism. They both represented those different camps, and the solutions are not obvious. That is what I'm trying to capture with that term. It's also a call to action and a reminder. We can either dismiss this as standard historical technology panic (which I think does us a disservice), or we can just assume everything's going to turn out great — in which case we're setting ourselves up for, I think, an unpleasant surprise.
SPENCER: Some listeners might think, why isn't there a middle ground? Right? Maybe all AI will do is help us be a bit more productive doing the things we're already doing. It can generate art and that's maybe a big deal for artists, but it's not going to affect most people that much. So what's the case that you can't be in the middle ground?
ADAM: You can be in the middle ground, and there are plenty who are, and maybe in the long run they will be the right ones. But to reference Harari's take on AI — it really is a different kind of technology. People compare it to the invention of nuclear power and therefore weapons. They compare it to the printing press, and I think those are not right. I think the systems we're building now are on scales and have potential capabilities that are completely different. Harari's take on this (and I will always defer to folks like Harari, because he has thought a lot about this in a very cogent way) is that it is the first technology we've created that can not only learn but can change itself. It's the first technology we've created that can make decisions essentially on its own. It's the first technology we've created that is learning about us in the process, both in terms of the data it ingests and how it is used. Let's talk apocaloptimism here. We are very optimistic about the value of things like personalized doctors and lawyers — personalized AI agents. To do that, AI has to know things about you, which also invites tremendous capability for malicious use, persuasion, and disinformation. And frankly, really bad recommendations. Both of those things are equally plausible. You can certainly rationally choose to sit back and see how this plays out, but I don't see how that ends up in a neutral space. That said, it is a fool's errand to forecast how this plays out — even though, full disclosure, I have helped set up a forecasting platform to try to do exactly that. In that case we are doing crowdsourced forecasting: asking lots of people and aggregating these very different, disparate signals into something more powerful, to help us see what's coming. So I'm not going to poo-poo people who decide that it will just be more of the same. I see that, ironically, as being less plausible than the other scenarios, only because of the nature of the technology and the speed of the engineering.
If we had a sufficiently mature AI safety science in place right now, I might feel more confident it would work itself out. By mature science I mean one with the hallmarks of a mature science: standards, methods, a body of confirmatory evidence, well-developed theoretical models, a community of practice, and well-established influence on policy. But engineering is outstripping the maturation of the science, as it always does. So I don't see a neutral ground there. I would like to be proved wrong. I'd like to look back 20 years from now and say that we were all worried about nothing.
SPENCER: When I step back and look at opinions on AI, I bucket them into a few different camps. There's the pure apocalypse camp — Eliezer Yudkowsky may be the most famous example. He thinks we're going to build AI, and it's going to kill all of us and destroy the world. Then you have a camp of pure techno-optimism. Ray Kurzweil might be the most famous example of this, thinking AI is going to create a type of paradise on earth and we're going to merge with the machines eventually, and so on. I don't know if he actually used the word paradise, but it seems like it's that kind of singularity in a positive sense, creating something much better than what we have now. Then there are the folks who feel more that AI is a tool, but it's not going to be that big a deal. Maybe it improves GDP a little bit. Not a big deal in the sense that it radically alters everything or kills us all. I would put people like Yann LeCun in that camp, or Andrew Ng, where they work on this technology but don't think it's going to be profound. They think it's going to be important, but not profound. Finally, you have the folks who are very skeptical of the technology, not because they're apocalyptic, but more because they view it as eroding things of value. These are the people who tend to focus on algorithmic bias and the effects of AI on social media, and how it can make things worse by micro-targeting outrage posts to you and so on. Are there any other kinds of positions that you see out there? Or do you think that covers the space pretty well?
ADAM: No, I think you've done a good job. Those are good quadrants. You can always point to Bruce Schneier, who thinks there's an additional camp — the warriors, who feel it doesn't matter if it's good or bad: we have to do this because if we don't, adversaries will, and then we're in trouble. That's a little more orthogonal, perhaps, to your excellent Venn diagram. I think you captured the takeaway, though, which is that this is a call to action, not panic. Don't sit back and relax; instead ask what the things are that we want to do. If you want to take Yudkowsky, Andreessen Horowitz, and the techno-optimist manifesto seriously, the things you'd want to do also speak to the very concerns that Timnit Gebru and other folks have now. To paraphrase Stuart Russell at the recent UK AI safety summit, we need to stop trying to make AI safe and start making safe AI. If you think about what's involved in making safe AI, it necessarily involves alignment, ethics, and different architectures, rather than just pure deep learning. You need AI that really understands causality and can develop a sense of metacognition that's much more transparent. Those are ideal solutions to the near-term problems of bias and inequity. Those are also the same solutions — things you'd want to put in place now — to address potential future existential risk. I walked away from that summit a little bit more optimistic that we're getting over these false dichotomies, where it's either existential risk or the dangers, risks, and perils of today. I think we need to get over that, because the solutions are the same, to some degree. Even for the optimists: if you really want to usher in this utopia, you're going to need an AI that is much more capable than what we have today. It is, to some degree, just a tool. But it is a profoundly powerful tool, especially when you bring an anthropological lens to this topic. Our approach to safety right now is red teaming — asking if you can get this model to do something we don't want it to do. We are forgetting the human interface element: even if the tool does exactly what you want, it is interacting with humans in a way that very few tools have previously. What humans bring to that tool, how it shapes their own behavior and interactions — that's all in play. If you want to help ensure that we veer towards the optimistic side, you would take the same approach to help avoid the existential risks Yudkowsky highlights. Has Yudkowsky done us a favor, raising the fever pitch about the end of times? I don't know, but if nothing else, it has given us something to talk about.
SPENCER: It's really interesting to hear you think there's overlap between dealing with short-term issues like algorithmic bias and the long-term risk of AI potentially destroying the world or creating one authoritarian government that rules through AI powers. You say there's overlap here, but are there divergences in those views? What would be the best strategy going forward?
ADAM: That's great. Let's explore that. Someone in the Yudkowsky camp could say, let's shut it all down right now. That's one solution. But I think you can still get to the Yudkowsky version of an AI that doesn't want to kill everybody. To do this, short-term approaches can be used that also address things like algorithmic bias — things like the ability to audit, the ability to increase transparency — really taking safety and alignment much more seriously. If you compare the proportion of resources invested in things like safety and alignment to what goes into engineering the actual systems, the engineering dwarfs it. I would want to see all those things happen even if there weren't this x-risk feature, and I do think they are going to be key for preventing x-risk. I suppose you could come up with a scenario in which we do tackle the near-term challenges in AI and are still surprised by the emergence of AGI — a super powerful intelligence. I would summarize it all as solving for the coordination problems. We are challenged now to coordinate at scales we haven't had to in the past, other than perhaps with nuclear weapons. Even nuclear weapons were relatively constrained to a dozen state actors. Now we have to coordinate more effectively, efficiently, and rapidly to address short-term risks. People talk about aligning AI with values, but whose values? And that's usually where the conversation ends, because nobody has a good solution to that — whose values do we want to put forward? Should it be the tech bros or the principles of liberal democracy? Should it be more from a global, economic perspective? That is a coordination problem we will need to solve for both short-term and x-risk questions. I do see it boiling down to this challenge of coordination. To lay a further apocaloptimistic irony on this, I think we need AI to solve this coordination problem — whether you want to talk about the future of governance, or just helping us think through and understand each other. We talked about wanting an AI that's aligned to values, but I'm not sure we have the systems in place now to actively elicit what those values would mean at scale — for example, from marginalized communities. I think AI will be critical for getting over that. There is really compelling work by Polis and other platforms that are trying to use AI to almost reverse social media algorithms — which currently engage you by dividing you, by getting you furious. This is using AI to help figure out what we actually have in common. How do we surface that and move forward? I just want to point out — I am not anti-AI. I think we need to build and use AI to help us thrive, if not survive, AI.
SPENCER: Can you unpack that a bit more? How do we currently use AI? How do you see it being used in the future to help us better deal with AI?
ADAM: I think we're just getting started there. There are a lot of good thinkers in this space. Aviv Ovadya talks about violet teaming and is looking at how to use AI-powered systems to do exactly that — elicit lots of different opinions from lots of different humans and then aggregate them in a way that reflects what they have in common. What are the principal components of these different values? The goal is to get this kind of discussion happening much earlier in the supply chain of AI. Rather than building a model, releasing it into the wild, and then saying, "Oh, I wish it didn't do that. I really wish it hadn't been biased against African Americans," there is an effort to build that in earlier on. I hear about Anthropic's work on Constitutional AI — while it doesn't solve this problem, there is creative thinking behind exploring the purpose of a constitution — to build principles upon which many different individuals can essentially coordinate agreement — and I think AI will be really important for doing that. I am a social scientist and have used the term "qualintative research". I actually went to DARPA to help motivate the development and use of AI-enabled tools to help us understand ourselves better. My sense is that the future of social science is using AI, and we also need social science to help us survive AI. Being able to pick up on patterns and understand the world at a macro level — that's the quantitative aspect, things we can measure. We are relatively comfortable with those; we can pick up on trends and patterns. But inside the process we can't lose the nuances. That's a term Teddy Collins uses, and I like it a lot. The nuances of lived experience — which is the traditional bastion of anthropology, the qualitative research. Some of the earliest (and I think best) anthropology borders on storytelling. Somebody like Clifford Geertz, who's a very famous anthropologist, really transported you to a different culture by telling a good story. But it was very qualitative. Nothing about it lent itself to easy statistics or simple modeling. I feel this is still underdeveloped, and I'm thinking hard about how AI might allow us to do that type of qualintative research — allowing us to capture perspectives and also understand how those perspectives are leading to the more macro-level trends we see. Potentially this could provide new interventions and solutions that reflect those local communities and can be operationalized in different ways at that local level — ways that reflect that lived experience. Now I sound utopian, but there are people thinking along these lines. I don't think it's resourced enough yet, as it's hard to say what the business model is. With advances like the U.S. AI Safety Institute standing up, hopefully there'll be additional resources pushing towards qualintative AI.
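As a rough illustration of the Polis-style approach Adam describes — surfacing what different opinion groups have in common rather than what divides them — here is a minimal sketch, assuming a toy matrix of agree/disagree votes. Polis's actual pipeline is more involved; this only shows the underlying idea of clustering participants and ranking statements by cross-cluster agreement.

```python
# Minimal sketch (not Polis's real algorithm): cluster participants by their votes,
# then rank statements by how strongly *every* cluster agrees with them.
import numpy as np
from sklearn.cluster import KMeans

# votes[i, j] = +1 agree, -1 disagree, 0 skip  (toy random data: 200 people, 12 statements)
rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(200, 12), p=[0.35, 0.20, 0.45])

# Group participants into opinion clusters based on their full voting pattern.
k = 3
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(votes)

# A statement's "consensus score" is its minimum mean agreement across clusters,
# so it only ranks highly if every opinion group leans toward agreeing with it.
consensus = np.array([
    min(votes[labels == c, j].mean() for c in range(k))
    for j in range(votes.shape[1])
])

for j in np.argsort(consensus)[::-1][:3]:
    print(f"statement {j}: minimum cross-cluster agreement = {consensus[j]:+.2f}")
```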
SPENCER: With regard to qualintative — I like that term. What I found in doing psychology research (which is not the same as doing AI research) is that the quantitative is incredibly important, because it lets you make a measurement and know that you found something. The qualitative is incredibly important because it helps you understand what you found and also gives you ideas for things to look for — part hypothesis generation, and part giving insights into what the measurements mean. For example, we ask people a quantitative question — score your answer on a Likert scale, from totally disagree to totally agree. Then we also ask them, "Well, what do you mean by that?" and sometimes we're surprised that what they meant was not what we expected. They were interpreting the question differently than we thought.
ADAM: At DARPA I was an anthropologist and ended up giving briefings about my programs, usually following engineering or materials science. I'd say, "You've heard from the hard sciences," and invariably people expected me to say, "so now you're here for the soft sciences." I would actually say, "Now you're going to hear from the hardest science." As Murray Gell-Mann said, imagine how much harder physics would be if electrons could think. Even when people give you an explanation, it's not clear that they know why they're doing what they're doing. It could be a function of a particular context. Qualintative, to me, is the question of how you capture the context of the actions, beliefs, statements, and data without losing the meaning, as opposed to just seeing what the measurements were.
SPENCER: One of the craziest specific examples happened when we were trying to replicate an academic study that claimed a pretty decent percentage of all people have psychotic beliefs. That's what the paper said. It was things like believing the television has messages that are just for you, or believing you have insects crawling over your body. We tried to replicate this. In addition to asking people the original study questions — which were basically Likert-scale agreement items, how much you agree with a statement — we also asked them to explain their answers. Indeed, we did find people who said insects were crawling over their body, and we did find people who said the television was talking to them. But their explanations were that they had lice, and that the ads on their television were micro-targeted based on their interests.
ADAM: No, not surprised, and that gets forgotten far too often. First of all, bless you for doing the Science Lord's work of replication. That has been a real bugbear from my perspective. One of the programs we launched at DARPA was trying to use AI — or at least large-scale advances in tooling — to help improve our ability to both do reproducible work and know what we should believe. How do we assign something like a credit score to research? I acknowledge that replications are not incentivized in current academic circles. I was pleased to see Brian Nosek and his group come out recently with a series of multi-site, multi-team studies in nutrition behavior. They showed that if you adopt strong scientific methodology, if you replicate and share your data, and take an open science sort of approach, you can replicate these effects at up to 96% of the original effect size. That just never happens as things are done currently. That's another example where I think AI can be really effective in helping us out. AI can essentially sit in the background and capture the process of what people are doing, to make it more transparent and ideally more useful for other researchers. It's not just the results — otherwise we're just doing Annie Duke's thing of focusing on the outcomes, when in fact the process itself is probably just as informative for advancing our knowledge. Certainly if a study fails — we don't get an effect, or we can't replicate, or we find that our original hypothesis doesn't hold — that should not be consigned to the dustbin of science. There's still information there, if we have some way of interpreting why it failed. And that is not incentivized right now.
[promo]
SPENCER: Now that we're on the topic of social science and AI, you mentioned to me there are some concepts from social science that you think are really helpful for making AI go well. What are those?
ADAM: Well "go well", or at least complicating the picture a little bit in a productive way, is where I'd put it. Unfortunately, that is what social scientists do — complicate the picture. To your point earlier about your psychology study, if you actually start talking to people, the picture suddenly gets a little bit murkier, right? They won't solve the problem, but I think they are interesting ideas. I'm drafting this up by the way, so this is poorly formed right now. One example of a useful social science concept is derived from an anthropologist named Mary Douglas, who in 1956 or so wrote a book called Purity and Danger. She took a very structuralized perspective to culture, the belief that structure of a culture predominates, as opposed to just wild agency. Here if you get into the postmodern world there's a sense everything's unstable, it's all at play. So you can't really talk about structure because it's either not a real thing, or it's violence being perpetuated by the powerful. I'm not dismissing this, by the way, but Mary Douglas' approach led to this concept of what's called "dirt". By "dirt", she meant matter that is out of place. She does this by analyzing a number of different things like cultural taboos, proscriptions on certain kinds of foods, for example, in Judaism in particular and other areas. Douglas asked why these things are considered dirty. Or in some cultures these same things are falling in-between categories attributed to really powerful capabilities, bordering on supernatural or potentially dangerous. Her take is, this matter that's out of place is stuff that's falling in-between categories. It is calling into question and blurring things that are cultural categories used to make our way through the world. We're constantly exposed to this onslaught of sensorial bombardment and we cannot see the world as it really is, as it would crush us. So we go around and categorize things.
The debates I've been hearing around AI sort of hint at this "dirt" status — this "matter out of place" idea is relevant for AI. Because if you think about what AI is doing, it's calling into question some really important and pervasive cultural categories. There's the obvious human/machine category. AI is now increasingly able to look like a human and act like a human, which calls into question our own sense of self. This is partly why people say we will never get to AGI, because it will never be able to do X. First of all, that X is usually superseded very quickly, so we move the goalposts. Secondly, the reason we're saying that is because, then what do we have? If AI is able to do these things that we consider the unique purview or domain of humans, then what makes us special in that sense? And that's true along a number of different categories, including things like self and other. AI increasingly knows you better than you know yourself, which calls into question how much of a bounded universe of rationality and decision-making you are. It's true of things like consciousness, for example — it calls into question our own limitations in understanding even what consciousness is. I think we need these categories to help us make it through the world. AI is calling those into question pretty quickly, which means we tend to attribute things to it. Some people attribute deeply powerful and potentially dangerous qualities to it. Other people, like techno-optimists, treat it almost like a gift from God in some way — AI is going to solve all these problems in some magical way. I don't mean to over-caricature the really intelligent debates occurring. There is, to me, an undercurrent of really important social science concepts that I think would help us better understand the context and perhaps some of the nuances of what's going on there. It does then lend itself to the question of where we are headed. If AI continues to disrupt these categories, we will do everything we can to re-impose those categories. At some point, we may need new categories, and this is why I think there's some value to an intelligent neologism. If you can come up with a new term for something, you've suddenly enabled yourself to think about the world in a slightly different way. That may be more important than ever, especially with AI.
SPENCER: Let's unpack this dirt concept a bit more. I want to make sure I understand it. Is the idea that there are things that fall cleanly into categories — man or woman? There is a person who's tall and hairy, identifies as a man, has a penis, and so on — that's a man. Then you've got a person who is somewhat shorter, has less hair, identifies as a woman, and has a vagina — that's a woman. Then the idea of dirt is that some things will fall between categories, and that makes people uncomfortable? Someone who says, "I'm not a man or a woman" — some people are cool with that, but other people are going to say, "That weirds me out; that makes me uncomfortable," because they don't know how to classify you in the classification system in their mind. Right?
ADAM: I think that is spot on. I think a lot of the debates you're seeing, not just with AI, but with the many different ways of self-identifying nowadays — it's interesting how many people have a visceral response to that, even though there's no rational reason to have that response. Invariably it is because there's something dangerous about this, or something impure, as Mary Douglas would say. Something unnatural. The thing about anthropology is, you have to give up the notion of what's natural pretty early on. Not to say we're not products of evolution — I am not disputing that at all. I'm a big fan of gene-culture coevolution: our genes predispose us to certain kinds of behaviors, which then lead to the development of culture, which in fact can then select for certain kinds of genes, and so on, so forth. I'm not saying that nature is not in here, but I am saying that as a species, we have really removed nature as the key determining variable. Instead we built these worlds for ourselves, partly to address coordination problems. Our tools, new genders, new self-determinations, and even cryptocurrency (for example) are, to some degree, dirt. Cryptocurrency is very dirty, because it calls into question both the value of money that's blessed by the nation and what money even is. Money is a collective fiction, and these things that surface highlight how unnatural things are — or rather, how much of this is socially and culturally determined. This really strikes at the heart of ourselves sometimes. I think it's going to be interesting to see how that plays out. The problem is that disruptions of categories can often lead to pretty aggressive responses. There are many people who feel it is non-negotiable that the world falls into man and woman — it must, it will, and "I will use whatever levers are available to me." Unfortunately, in some cases violence is used to impose these categories, partly because people believe they're natural. That really does concern me in terms of our bounded rationality, and how we make these kinds of decisions.
SPENCER: It sounds like you're proposing what I think is an interesting idea: that people really resist having categories violated — categories that are deeply ingrained in their culture and in their minds. It makes them deeply uncomfortable when those kinds of categories get violated. The term "dirt", I feel, is an odd choice of words. It sounds like it's denigrating. Do you know why it's called dirt? What does the word "dirt" refer to there?
ADAM: Mary Douglas uses it in her book Purity and Danger to open the discussion. That's why I prefer to use the term "matter out of place", because there are also clearly things that fall in-between categories in a culture that are given really positive connotations and attributed very powerful, beneficial, or benevolent qualities. You are right, it does come off as way too pejorative. I think you're already touching on a really interesting cultural category, which is: why does "dirt" have to be bad?
SPENCER: That also makes the choice of the word "dirt" interesting, because dirt outside is great, right? It's where we put our plants. It's dirt that's in the wrong place — dirt that's indoors and your response is "oh, that's dirty". But if it was dirt on the ground, you'd think that's exactly where it's supposed to be.
ADAM: That is the point. Where it is determines what you attribute to it and what you do with it. That's the point about things falling in-between categories. She has this great analogy of walking down a hallway in a house: things belong in their own rooms, and things that get out of their rooms and into the hallway are matter out of place. Again, I wouldn't over-emphasize the term "dirt" here. In this case, she's using it as a way into how taboos arise. She doesn't fully explain how taboos arise, which is another interesting question. I have been thinking about how we might actually use AI's "matter out of place" status to build taboos — in some cases, around potential malicious uses of AI, or instances where it's hard to regulate or write laws that can anticipate both the development of the technology and what it can do. Are there ways we could leverage social science and these sorts of constructs to build taboos, so that it's unthinkable that somebody would do X? The challenge I have is that there are very few universal taboos. I mean, there's incest, and to some degree every culture has certain food taboos — you don't eat each other; no cannibalism. If you're American, you tend not to eat dogs, but that's not true everywhere. But how do we create taboos around AI? You don't use AI to build a biological weapon. You just don't. I don't know that it's a fully solvable problem; I'm not even sure it's entirely doable. But I think it's an interesting question: can we use that in-between status, that "matter out of place", to our benefit when it comes to trying to survive AI?
SPENCER: I think you're suggesting the reason that AI is dirt, according to this concept, is because it's not quite software, but it's not quite a person. It's not exactly dumb, but it's not exactly intelligent, and so on. It's just hard to place, as it's not the sort of thing that we've had to deal with before. Is that right?
ADAM: Yeah, I think that's right. The reason that's relevant is because it calls into question our own sense of ourselves. So the reaction is: "AI cannot be creative. Anyone who says AI can be creative, I refuse to acknowledge." Because if AI can be creative, then what about this almost Cartesian mysticism of my creativity stemming from some inexplicable source inside my own brain? Yes, I'll admit that neurobiology plays a role, but it's still somehow this thing that I own as a human. If a machine can be creative, then it calls into question my own understanding (plus my own uniqueness) when it comes to creativity. I'm not actually commenting at all on the question of whether AI is or is not creative. I think it says more about us than about AI.
SPENCER: That reminds me of an interesting exchange I had with an artist who was criticizing AI art. I gave them a little game: a bunch of pieces of AI art paired with "real art", chosen to be similar in theme, and I asked which art they liked better — not asking them to guess which was AI, just which they liked better. They didn't know which was AI and I didn't tell them. I think on some level they rejected the whole premise; as I realized after talking to them more, it didn't even matter to them which they liked better. There was something wrong with AI art, independent of how pleasing it was or how interesting it was to look at.
ADAM: That's really interesting. I hope when you were talking to them, you pulled the threads like you do here and said, "Unpack that for me. Why is it inherently wrong?" I think at some point you'll get to "it just feels wrong." In anthropology, that feeling is usually a sign you're bumping up against a cultural category — when things feel wrong. That is true even for things like free will and determinism. We are creating machines that appear to be able to make decisions we can't predict or even necessarily engineer. That begins to call into question: could we ourselves be some deterministic outcome or product? I'm not in that camp. Robert Sapolsky's most recent book, Determined: A Science of Life Without Free Will, looks at the neurobiological influences on human behavior, and he has come to the conclusion that there is no such thing as free will — you can't locate free will anywhere in the chain of causality. I think he goes too far, but I sort of say that because it feels like he's gone too far. I have to step back a little bit and be a little more reflexive and ask: is that because I refuse to believe that I don't have free will? There are other sorts of social implications — Sapolsky goes into those — looking at what jurisprudence would look like if we actually believed in something less than free will. We punish people for the things they've done. In theory, if you don't believe in free will, you would take a much larger, sociological view and say you're only going to put people in prison if there's reason to believe they can't not do the thing you don't want them to do. It also implies very different interventions to try to change those behaviors. That circles back to the question of who makes those interventions, and how do we make those decisions? I don't want this to descend into a freshman dorm hallway conversation, but I use it as an illustration of how, at some point, something just feels wrong to me. I don't know if that is what tells me that I'm in an area that's dirty.
SPENCER: It is interesting to observe yourself that way, right? Noticing that you might have resistance to these category-breaking ideas and trying to be mindful of that.
ADAM: Now, anthropology will do that to you. In some sense — if you do it right, if you take seriously the implications of things like social science and anthropology — you really do come to see our almost borderline pathological individualism in the West. There are pluses, obviously, but it shows up as the belief in the great man, the lone genius, the sole inventor. Now, with our systems increasingly interconnected, I don't know if that individualism is going to serve us very well, especially for the coordination problems. If we really do want to accelerate our ability to innovate in these spaces and on all these problems, I think focusing on the individual could be really misleading and may not be the most productive way to go.
SPENCER: Let's talk about that. What is the alternative?
ADAM: I mean, again, God bless your podcasts. You've had a lot of people on there who are certainly smarter than I am. I've thought a lot about alternative mechanisms that leverage collective intelligence — again, a wildly under-formed idea here. I have been thinking about what it would mean if we moved away from the agent as the focus of our attention and thought more broadly about the collective — a term we can unpack a little later — from the perspective of artificial intelligence. The usual notion is that artificial intelligence will replicate essentially what a human being can do. But if you actually look at the success of our species according to Joseph Henrich (the anthropologist) and Michael Muthukrishna (who has this great book called A Theory of Everyone), the folks in social science who have studied this realize our intelligence operates at a higher level. Our ability to learn from each other, to store information, is a function of our social learning and our collective intelligence — in a way that the notion of individual intelligence kind of does a disservice to. Now, I'm not saying that people don't vary in their intelligence, but what would happen if we moved from this notion of IQ (intelligence quotient) — where I can give you a test to determine your speed of processing information and logical reasoning — to this idea of QI (quorum intelligence)? An idea, again, not well formed. But this idea of quorum intelligence is where I'm trying to measure your access to wider networks of intelligence, so that when I give you a test, what I'm actually testing is the intelligence of the network that you are in. That in many ways reflects more of your capability: your upbringing, the people you spend the most time with, the resources you've been given. Instead of saying, "Spencer Greenberg is a really, really intelligent guy," what I might say is, "Spencer Greenberg has a high QI; clearly he has access to lots of collective intelligence." I don't know exactly how it would work, because our cultural categories are so firm in placing the individual at the forefront. It's coded into the Constitution, right? The individual's right to happiness — okay, got it, the pursuit of happiness and liberty. If we step back and say, "Well, we're not going to measure IQ at the individual level," we would look at how to measure intelligence at a network level. That has some really interesting implications for how we would have done AI differently. Rather than trying to replicate an individual chess player's performance, we would instead be trying to figure out the collective intelligence that produced that chess player. That's a different kind of system. How would you create that boundary? I don't know. Karl Friston talks about a Markov blanket, which is how you bound things together in ways where they're interrelated and you can speak about them intelligently, from an organism to a species. So far, pretty philosophical. There are implications for how we would do AI, how we would try to govern AI, how we would try to improve people's individual intelligence. I would take a different approach if I measured QI rather than IQ, and my particular bone to pick is that I would attribute contributions in a different way — contributions meaning people who make radical leaps forward in terms of paradigm-shifting new thoughts, innovations, etc. Their contribution would be recognized in the context of the much broader QI they were able to tap into and benefit from.
That leads me to ask: can we measure QI in the modern world as a way to help us assess whether AI is making us collectively smarter? Yes, AI is making me more effective — using ChatGPT, I can write things faster. But what I care about is the contribution: is it actually improving intelligence measured at a more collective level? Ultimately, that is the thing that's going to determine whether we survive or not. It's not going to be my individual idea — not that anybody even has individual ideas, right? Your ideas are largely synergistic and syncretic, in the sense of lots of pieces put together and then called "your idea" — I don't think that's quite right. I realize this is wildly abstract and probably not helpful. This is the sort of thing I'm pushing on: how we would think about that, and whether it has implications for AI development, architecture, and then ultimately even AI governance?
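As a deliberately toy illustration of the QI idea Adam flags as under-formed, one very rough way to operationalize it would be to score a person not by their own skill but by the distance-discounted skill of the network they can reach. The network, the skill numbers, and the decay factor below are all invented for illustration.

```python
# Toy "QI" sketch: a person's score is the distance-discounted average skill of
# everyone reachable in their network, not their own skill. All values invented.
from collections import deque

network = {                      # who regularly exchanges ideas with whom
    "spencer": ["adam", "chatbot", "forecaster_pool"],
    "adam": ["spencer", "isi_colleague"],
    "chatbot": [], "forecaster_pool": [], "isi_colleague": [],
}
skill = {"spencer": 0.70, "adam": 0.80, "chatbot": 0.60,
         "forecaster_pool": 0.90, "isi_colleague": 0.75}

def qi(person, decay=0.5):
    """Breadth-first walk: nearer contacts count more than distant ones."""
    seen, total, weight_sum = {person}, 0.0, 0.0
    queue = deque((neighbor, 1) for neighbor in network[person])
    while queue:
        node, dist = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        w = decay ** dist
        total += w * skill[node]
        weight_sum += w
        queue.extend((neighbor, dist + 1) for neighbor in network.get(node, []))
    return total / weight_sum if weight_sum else skill[person]

print(f"spencer's own skill: {skill['spencer']:.2f}, toy QI: {qi('spencer'):.2f}")
```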
[promo]
SPENCER: It's really interesting to think about how you would measure the intelligence of the network. If I think about this for myself — how I would evaluate my QI — it feels like it's going to depend a lot on what I have access to. My QI if I have ChatGPT seems to be bigger than if I don't have it. My QI if I can post questions on Twitter and Facebook and crowdsource answers is higher than if I don't have access to those tools. It also depends on the people I know, and how willing I am to reach out to those people and get their advice. If I have a tendency not to actually use my network, then my network has no value. It feels like it depends on tools, my propensity to use those tools, my network, and my propensity to use the network. Is that right? Is all of that wrapped up in the idea of QI?
ADAM: I'm sure it would have to be. And again, this is why, as a concept, it'll probably live briefly during this podcast and then die the death it deserves. I do think that's right. There's another part to it that I'd like to highlight — not just the QI operationalized in Spencer Greenberg's behavior, but also the contributions Spencer Greenberg can now make to QI through those resources. Using ChatGPT may improve your access to QI resources, but is it increasing your contribution back to QI? Are you making the network smarter because of that tool? This element reaches back to scientific incentives. I don't care that you got published; I care that you got it right and that we know collectively more than we did before. There are people who argue that part of the challenge of scientific reproducibility is that individuals are now engaged in this zero-sum game of publication, tenure, and individual rewards. That is problematic, but if what I care about is your contribution to QI, then I would be more interested in you putting out fewer (but more impactful) results. That would make us all smarter, and you've improved my QI. I would love to recognize that somehow, but I don't know how. As a final point, it speaks to another utopia — the notion of cognitive diversity that we all pay lip service to. In social science and judgment and decision making research, sometimes these things are hard to replicate, and the goal in science is to try to make an effect go away: that's interesting — let's see if we can make that go away. If we can't, then there's something real. The role of cognitive diversity in things like forecasting and decision making, if harnessed appropriately, is an effect you can't make go away. On the whole, more diversity, if harnessed effectively, is better than less diversity. If I actually care about my QI, I will be really motivated to extend the diversity of my network, in the recognition that it is likely to improve my QI. That's why I've spent more of my life than I probably should have trying to push this forecasting platform, which is premised on this idea of diverse forecasters. When brought together, using algorithms and forecasting platforms, it really does give us stronger insights and better forecasting. I can make that effect go away if I remove diversity: I can have lots of people who think the same way, and we will lose that effect in terms of forecasting. This is probably not surprising, coming from the social sciences, but I don't know if that QI concept has been taken seriously enough in places like artificial intelligence.
SPENCER: When you talk about cognitive diversity – can you unpack that a little bit? We hear a lot about diversity in terms of different racial groups or different ethnic groups. We also hear about diversity of class, having people who grew up with different amounts of income and different kinds of opportunities. But what is cognitive diversity?
ADAM: I use that term, which I've derived from folks in the literature, as a way to highlight that part of what you want from diversity — economic and demographic diversity — is not just more white, brown, or yellow folks at the table. It's that their experiences are likely to be different enough that you will have a cognitively diverse group. By that I mean how people think, which goes back to the QI: how they think as a function of the networks they've been in, how they think as a function of where they've been in the world. That gets back to these nuances. Everybody brings different nuances to the table, and that shapes their cognition: what they attend to, how they process it, what it means to them, what they do with it, the associations they make in their heads with other things they've seen that I have no idea about. That's what I mean by cognitive diversity — it isn't just that their thinking is different; that thinking is a function of the networks they grew up in and the QIs they've had access to. I think you're trying to tap into as many networks as possible to get that cognitive diversity. If I were a straightforward computer science person, I'd simply ask how that's different from ensemble modeling — and I'm not sure it is. Maybe it's very similar in that sense.
SPENCER: If I were to try to define it more technically, it sounds like they make different predictions from the crowd, but not because they're less good at predicting. If someone makes the same predictions as everyone else, then they're not adding any value to the system. Everyone else is already making those predictions. If they make different predictions from the crowd, but those are wrong, that also is not adding any value. It seems like the value comes from them being different predictions or generally just thinking differently, but in ways that are not less likely to be correct. They're correct in different cases.
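Spencer's framing — predictions that are different but not less accurate — can be made precise with a standard identity sometimes called the diversity prediction theorem (neither speaker cites it, but it formalizes exactly this point): the squared error of the crowd average equals the average individual squared error minus the variance of the predictions. Holding individual accuracy fixed, more disagreement directly lowers collective error. A quick numeric check with made-up forecasts:

```python
# Numeric check of the identity behind "diverse but not less accurate":
# (crowd_avg - truth)^2 = mean individual squared error - variance of predictions.
# The forecasts and the true value are made-up numbers.
import numpy as np

truth = 0.30                                    # the outcome being forecast
forecasts = np.array([0.10, 0.25, 0.40, 0.55])  # four forecasters who disagree

crowd = forecasts.mean()
crowd_error = (crowd - truth) ** 2
avg_individual_error = ((forecasts - truth) ** 2).mean()
diversity = ((forecasts - crowd) ** 2).mean()   # spread of forecasts around the crowd

print(f"crowd error          : {crowd_error:.4f}")
print(f"avg individual error : {avg_individual_error:.4f}")
print(f"diversity            : {diversity:.4f}")
assert np.isclose(crowd_error, avg_individual_error - diversity)
```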
ADAM: Two things come to mind as part of this conversation. The first is: that's why we've moved past prediction markets, where we focus on resulting, and many of the forecasting platforms now actually ask for rationales. Why you are making this prediction is just as informative as the accuracy alone. Now, I want to listen to people who are accurate. But even if someone's inaccurate, the reason they're making that prediction is itself informative, because it's a signal about something in the world that someone is attending to that we wouldn't have known about otherwise. I think we can do stuff with that. The second point is: you've got to play the long game when it comes to cognitive diversity. You can cultivate it, actually recruit it, but you also need to realize that certain aspects of that network (of the QI) will be more wrong than others under certain contexts or conditions. You won't know what that is if you don't have a track record of how that works. And by the way, people can get better at forecasting. It's an open question as to how much better they can get. What we do know is that the QI may not do much for your IQ or even your individual forecasting. Having you in that mix, though, can improve the QI, because I can now weight forecasts based on your track record. That depends on you getting rewards for being part of that track record. If I'm only going to reward you for being right (i.e., a prediction market), you will see attrition to some degree. So you do end up with a really interesting group of people playing on prediction markets, who are obviously very smart and are generally pretty good at prediction. It's an open question whether you'll get cognitive diversity out of that, for those reasons. So I'm a big fan of rationales. In fact, we're doing work now on how to take rationales and build out causal models of different groups of forecasters. People tend to make certain kinds of forecasts, and in the rationales they provide it's interesting to ask how they think the world works in relation to the prediction they're making. You can start seeing these camps of different kinds of causal models. This is early days, but push that forward and we can potentially use AI to help us understand the best way to ensemble this cognitive diversity to make better forecasts and potentially spot new areas of innovation. Look at James Evans at the University of Chicago, who is using really big data patterns to try to identify white spaces in innovation — where is nobody looking? Maybe it's because there's nothing there. But my suspicion is (much like James's) that it's because nobody has thought to look there, or they haven't been incentivized to look there. Can we both leverage and cultivate cognitive diversity, and can we actually promote it by identifying areas we want to send people into? Or, in your case, Spencer, how can I improve your QI by finding an entirely new network, area, space, innovation, or research and attaching it to you? And making that available in some sense?
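As a minimal sketch of the track-record weighting Adam describes — not the actual aggregation method of INFER or any particular platform — one could weight each forecaster's new probability by the inverse of their historical Brier score. All past questions, outcomes, and forecasts below are invented.

```python
# Minimal sketch: weight new probability forecasts by each forecaster's track record
# (inverse Brier score on past resolved questions). All numbers are invented.
import numpy as np

# past_probs[i, q] = forecaster i's probability on past question q; outcomes are 0/1
past_probs = np.array([[0.8, 0.3, 0.9],
                       [0.6, 0.5, 0.7],
                       [0.2, 0.9, 0.4]])
outcomes = np.array([1, 0, 1])

brier = ((past_probs - outcomes) ** 2).mean(axis=1)   # lower = better track record
weights = 1.0 / (brier + 1e-6)
weights /= weights.sum()

new_question = np.array([0.70, 0.55, 0.20])           # today's forecasts on a new question
aggregate = float(weights @ new_question)
print(f"weights: {np.round(weights, 2)}, weighted aggregate: {aggregate:.2f}")
```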
SPENCER: You can imagine a prediction market system that specifically rewards people for adding knowledge to the network — where if they make a prediction they're right on while others are not, they're rewarded more than if they make a prediction they're right on and others are also right on. I don't think prediction markets typically do that — I think you're not rewarded more for being right when others are wrong. Is that true?
ADAM: I think that's right. There are a lot of people who have ferocious debates about the value of straightforward prediction markets versus aggregated crowdsourced forecasting — crowdsourcing is sometimes considered a sort of polling in some sense. There's a ferocious debate to be had there. I think generally your characterization is right. The other challenge (and I think one of your previous guests mentioned this) is that we have a visceral reaction to using things like prediction markets when it comes to intrinsic or sacred values, compared to instrumental things. If we want to use a prediction market to predict the weather in a couple weeks — great, we have no problem with that. If we start using prediction markets to predict things that are closer to cultural categories we consider sacred or intrinsic values — things you shouldn't bet on — then there's going to be a lot of pushback. That pushback is not rational, but deeply cultural and visceral. In theory, we should want as many sources of information as we can get to make the right decisions. I actually think (if I pull on this further) that some of the pushback we get with things like forecasting in areas like government is because it challenges this cultural category of "I am the decision maker: you bring me information, I make decisions." If you actually start looking at the value of things like prediction markets, collective intelligence, and aggregated crowdsourced forecasting, you begin to realize that asking lots of people for their opinion offends us in some way — or offends some people — even if they're demonstrably more accurate than you. I'm proud to know a lot of people like Jason Matheny and other folks who have that epistemic humility and want all the information they can get when they're trying to make a good decision. I think there's a cultural category that says "I'm the decision maker and these are my sources of information; anything outside of that is almost dirty." And I wonder to what degree aggregated crowdsourced forecasting is a kind of dirt when it comes to decision making, because we valorize, again, the lone genius who puts all this stuff together him or herself and comes up with the right plan. We know demonstrably that forecasting approaches can be much more effective for making accurate forecasts, probably because they're informed by greater diversity. We can learn a lot more from that than from someone behind a table who states, "Well, this is what we're gonna do."
SPENCER: We've talked about different considerations when it comes to thinking about the future of AI. You're the director of the AI division of the Information Sciences Institute at USC. What approach are you taking there?
ADAM: If you look at the world around us — and full disclosure, I cheat on you a little bit; I listen to other podcasts, if you can forgive me for that. One I like is "Last Week in AI." Just from week to week, if you look at the tremendous engineering change, the evolution and acceleration of this technology, you will sit back and think, "I'm going to build a strategy for this?" A strategy seems futile at best, and certainly likely to be wrong. But then I think: the AI division hired an anthropologist to be its director, and that's weird on the face of it. I think part of why I'm there is because the strategy we need to start building is this engagement of AI and social science in a meaningful way. I saw even at DARPA the thinking that if you stick social scientists, computer scientists, and data scientists together, goodness will come. That is rarely the case, because they reflect different cultures to some degree, and they don't have the same language. I think we have to get serious about this. My strategy at the moment is three buckets for AI, specifically in the context of social science — defined more broadly as trying to bring science to understanding how humans and the socio-technical systems they create operate, behave, and ultimately can be improved. The first bucket is what we call AI Now: the tool set that exists in the near term to be used by social scientists and other people to tackle problems that are happening at the moment. That could include, for example, ChatGPT as an assistant — and not just ChatGPT; there are lots of interesting tools out there in the space, like Elicit. That is AI Now: how do we empower social science to make profound insights and, ideally, come up with good solutions to hard problems with what we have now? But it's also obvious that AI, as it exists now, is not going to get us to where we really need to be. So the second bucket is AI Next: what is the AI we have to build to really put the science into social science? That would include AI that can do things at a collective level — how do we build social AI that can learn from us, and from each other? How do we build AI that understands causality, that can do inference? You touched earlier on this idea of metacognitive AI — how do you build an AI that figures out what's the right and wrong thing to do? That's a hard problem, obviously, because it's unlikely (and I've been wrong before) that it will emerge organically from the current approach of just data, compute, and algorithms. There's a lot of work to be done on algorithms in that space. That's what we're thinking about with AI Next. The third bucket is AI in the Wild. Here it's about how we have to be able to measure AI at a quantitative level, but also understand the meaning and nuance of what AI is doing for us and to us in the real world as we deploy these things. In the absence of getting our hands around AI in the Wild, it's going to remain largely an engineering problem, and I don't think that's going to turn out well. Those are the three areas I'm thinking about, in reference to things like AI safety, etc. I'm both very amazed by and grateful for the opportunity to be at a place like ISI, where they have deep technical expertise — and also to be coming to them knowing the tremendous amount of talent and cognitive diversity that's outside of ISI, USC, and frankly, even the country.
I recognize that we have an opportunity to really up our game when it comes to QI — our collective intelligence. That's where we're trying to drive.
SPENCER: Adam, thank you so much for coming on. I really appreciate it.
ADAM: It was delightful, Spencer, thanks. I don't know if I'd call it a bucket list item, because I hope to be around for a while, if the singularity is right. But being on this podcast is a real honor. I look forward to future episodes, because I don't learn as much from any other podcast as I do from this one.
SPENCER: Thanks, Adam.
[outro]
JOSH: A listener asks: "It seems like a lot of the survey tools you use may have selection bias due to the participants because, for example, the surveys are online, they're about specific topics that certain people might be interested in, etc. Also, many of the quizzes ask deep or personal questions that I am not comfortable sharing my views on. As such, the people that do share may again be self-selecting. So do you agree or disagree with this? And how do you compensate for it if it's a problem?"
SPENCER: So when we collect data on Clearer Thinking, there's definitely a lot of selection bias, because the sort of person that's interested in Clearer Thinking tends to be more reflective and more intellectual than average. They tend to live in big cities around the world like New York or London or Melbourne or San Francisco. So it's definitely an unusual group. But one really nice thing is that this is a group that we really care about learning about. So when we're drawing conclusions, they're applicable to the audience we're trying to reach. It's a nice feedback loop in that sense: we can learn about our audience and then apply it to that same audience. When we run studies of the general public, we use our own platform called Positly, which you can use as well. And Positly — while the demographics are somewhat self-selected and it's not a truly representative sample — is actually not a bad sample; it's quite diverse. It covers quite a lot of different sorts of people. And I think most of the time, if you get a conclusion from that, it will mostly generalize to the US population. I wouldn't use it for something like a prevalence estimate. If you want to know exactly what percentage of people have OCD, that's not the right use for it. But if you want to learn about what correlates with what, or what predicts what, I think it can be really useful, and I think it will usually generalize, because for those kinds of inferences you don't need a purely nationally representative sample — you just need a diverse sample, and then usually you'll get the right answer.