CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 055: Rationality and Cognitive Science (with Anna Riedl)


July 2, 2021

What is the Great Rationality Debate? What are axiomatic rationality and ecological rationality? How irrational are people anyway? What's the connection between rationality and wisdom? What are some of the paradigms in cognitive science? Why do visual representations of information often communicate their meaning much more effectively than other kinds of representations?

Anna Riedl is a cognitive scientist with a primary research interest in judgement and decision-making under unmeasurable uncertainty, a field at the intersection of psychology, neuroscience, and artificial intelligence. She loves the scientific method so much that she regularly spreads her joy about it in various formats of science communication. In the end, she cares about ideas being applied in the real world, solving problems, and benefitting humanity. This means she often plays the role of an interface between the world of ideas and their application by humans. Over the past several years, she has founded and led different organizations in the DACH region that work on improving the world. You can find more about her at riedlanna.com, follow her on Twitter at @annaleptikon, or email her at annariedl.office@gmail.com.


JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Anna Riedl about unifying sides of the great rationality debate, the development of insight and self-understanding, procedural knowledge and relevance realization, and visualizations of cognitive science.

SPENCER: Anna, welcome, it's great to have you here.

ANNA: Very glad to be here, Spencer.

SPENCER: So the first topic I want to talk to you about is the great rationality debate. What is the great rationality debate?

ANNA: When it comes to the great rationality debate, I'm often surprised that many people have never actually heard about it at all. Within the whole topic of rationality, there are basically a couple of tribes that disagree with one another. Two of these tribes are the axiomatic approach to rationality and the ecological approach to rationality. One of them most people associate with Kahneman and Tversky, and the other with Gerd Gigerenzer.

SPENCER: Great. How would you describe the two positions of each of those tribes?

ANNA: They're both quite complex, but the main difference is that the axiomatic approach builds heavily on the axioms of rationality by von Neumann and Morgenstern: if someone follows the axioms of rationality, then they will behave as if they were maximizing utility. Ecological rationality, on the other hand, is more concerned with adaptation to the environment directly, without measuring that adaptation by whether certain axioms are followed.

SPENCER: Right. The axiomatic view, as I understand it, basically says, "If we are trying to design a perfectly rational agent, we can say that there are certain things that should be true about the way they make decisions." For example, if they like A more than B, and they like B more than C, they should like A more than C, right? If they didn't, that would be a weird contradiction in their choices. That's just one example, but there's a bunch of these axioms of what it means to be rational that we can define. Then we want to measure human rationality against this sort of axiomatic system. Is that a good way to describe it?

ANNA: Yes, perfect. What you described is the axiom of transitivity. And the main idea really is that we cannot ever say directly whether an organism or agent maximizes utility, because you would have to know so much about its goals, its constraints, and so forth. It's basically impossible. Because we can't say it directly, the axiomatic approach uses these axioms: if one follows them, then one behaves as if maximizing utility. That's the main idea there.

SPENCER: And just to clarify here, when we're talking about maximizing utility, we don't mean utility in the sense of happiness or well-being or something like that. We just mean that there's some way to describe the agent's goals as a utility function, which is basically how good it thinks different states of the world are. Then we're talking about an agent that maximizes utility in that sense, right?
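To make the axiomatic picture a bit more concrete, here is a tiny illustrative sketch (not from the episode): if an agent's preferences can be summarized by a utility function, transitivity holds automatically, whereas a cyclic preference like A over B, B over C, C over A cannot be represented by any utility function and is exactly the kind of pattern the axioms rule out.

```python
# Illustrative sketch: utility-based preferences are automatically transitive;
# a cyclic preference violates the transitivity axiom.

utility = {"A": 3.0, "B": 2.0, "C": 1.0}

def prefers_by_utility(x, y):
    return utility[x] > utility[y]

cycle = {("A", "B"), ("B", "C"), ("C", "A")}   # A over B, B over C, C over A

def prefers_cyclically(x, y):
    return (x, y) in cycle

def is_transitive(prefers, items):
    # If x is preferred to y and y to z, then x must be preferred to z.
    return all(prefers(x, z)
               for x in items for y in items for z in items
               if prefers(x, y) and prefers(y, z))

items = ["A", "B", "C"]
print(is_transitive(prefers_by_utility, items))   # True
print(is_transitive(prefers_cyclically, items))   # False — a "money pump" pattern
```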

ANNA: Exactly, yeah, very well said. One point I want to make: officially, there's one great rationality debate, but what I find really interesting is that it's basically still continuing. There was a publication in 2002 by Tetlock and Mellers summarizing the great rationality debate as it stood at the end of the 20th century, but there are still ongoing debates. Even two or three years ago, there was a big publication, a big debate [laughs], on mind, rationality, and cognition, where even more people weighed in. One might think it's just a historical debate [laughs], but it's not at all. And that's really what excites me so much about it, because it's still this melting pot where artificial intelligence, economics, psychology, and philosophy really [laughs] have to figure out very basic assumptions and what means what. I think it's a really good place to test assumptions.

SPENCER: So, let's talk a little bit more about what the ecological rationality view is. Is the idea essentially that it's wrong to think about rationality with regard to these kinds of formal axioms of a rational agent? Instead, we should talk about rationality in terms of getting the job done in the real world, like, "Okay, you're a creature. You're trying to survive in a certain environment, and being rational is about actually succeeding at that, or succeeding at your goals in the actual environment in which you exist?"

ANNA: Yeah, I think that's very well said. Underneath those two approaches, there's also a different conception of the size of the gap to rationality. One side is associated with the Meliorist view, and one with the Panglossian view. So one side assumes there is this normative ideal, there's quite a big gap, and we could potentially overcome it (that's where the biases come in). The others are more on the Panglossian side, assuming we are in the best of all possible worlds and the gap is not really that big. It's not that one side completely believes one assumption and the other side [laughs] the other, but the positions seem to be highly correlated.

SPENCER: So, what is the debate, really? Is the debate about which of these best describes humans? Or is the debate about which of these should we be trying to achieve like normatively? What should we be aiming for? Or how would you describe what's actually being debated?

ANNA: Very good question. I think the main thing that's heavily associated with the debate is the question, "Are we biased, or are our heuristics the best we've got?" I think that's often at the core of it. But there are many, many other points that are also very interrelated.

SPENCER: Yeah, I find this kind of weird, because what would the argument be that our heuristics or our biases are the best they could possibly be? That just seems like a very strange thing to argue. Is one side actually making that claim?

ANNA: I think not explicitly in that sense. But there are, for example, new ideas that came in, like this idea of computational rationality. Usually, when you're deciding between system two and system one, you ask yourself whether to think longer about a question. Then you have the speed-accuracy trade-off. But when you are embedded in the real world, of course, computation also has a cost. You also have opportunity costs in the real world while making the decision. And often, a very fast decision is already boundedly optimal.

SPENCER: So it seems like part of this debate is about how we actually do things in the real world. Do we do them by systematically thinking about them and estimating probabilities, saying, "Oh, there's a 90% chance of this, 10% chance of that"? Or do we do them by a more intuitive process that just leads us to the right answer through a series of fast heuristics that we don't necessarily even have awareness of?

ANNA: Yes, but it goes deeper. You can find examples in investing where complex strategies outperform simpler strategies in specific circumstances. But overall, simpler solutions often outperform the more complex ones on average, because they're more robust to the uncertainty in the environment.

SPENCER: Interesting. Is the idea there that these more complex solutions are kind of overfitting to the noise or something like that?

ANNA: I think that will be a reasonable analogy.

SPENCER: Whereas a simple heuristic is designed to be applied in lots of different situations and not worry too much about the exact details of it.

ANNA: Yeah. I think I have a bit of a problem right now with knowing too much about it. So I always feel [laughs] like any summary misses out on so many other parts. But I think that is one central point, yes.

SPENCER: Got it. I think one of the classic examples that comes up in this debate is a problem like the base rate fallacy. Imagine you're doing a random cancer screening. Someone comes to the doctor's office and gets screened for cancer, and the test is really accurate. Let's say it's 99% accurate: if you have cancer, there's a 99% chance it says you do, and if you don't have cancer, there's a 99% chance it says you don't. Then you do this random screening. A random person comes in, you screen them for cancer, and the test says they have it. But let's add the caveat that this is actually a really, really rare cancer; maybe only one in 10 million people in the population actually have this kind of cancer. A lot of people's initial impulse is to say, "Well, because the test is 99% accurate, it's very likely this person actually does have the cancer." But if you actually do the Bayesian calculation, where you start from the prior that there's only a one in 10 million chance the person has it (because it's a random screening for a very rare form of cancer), and then account for the fact that the test is 99% accurate, so it makes a mistake about one out of 100 times, it turns out it's much more probable that the person doesn't have the cancer than that they do. Those one-out-of-100 mistakes occur much more often than that one-in-10-million prior. So this is a classic example where people's intuitions tend to give the wrong answer. But, as I understand it, by tweaking the example a little bit, researchers were able to make it so that people actually do get this problem right much more often. Instead of presenting the information probabilistically, they present it as: out of 10 million people, this many have the cancer, the test would correctly flag this many of them, and so on. So by restructuring the information, they're able to get people to answer correctly much more often. I guess the argument there is that this is actually a much more realistic way to present the information, based on real-world situations, instead of putting it in terms of these very abstract probabilities. Any thoughts or reactions to that?
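To make the arithmetic in Spencer's example concrete, here is a small worked sketch (the 99% accuracy and one-in-10-million base rate are the numbers from his example; the code is just an illustration):

```python
# Bayes' rule for the rare-cancer screening example above.
prior = 1 / 10_000_000        # one in 10 million people actually have this cancer
sensitivity = 0.99            # P(test positive | cancer)
false_positive_rate = 0.01    # the test is also 99% accurate for healthy people

# Overall chance that a randomly screened person tests positive
p_positive = sensitivity * prior + false_positive_rate * (1 - prior)

p_cancer_given_positive = sensitivity * prior / p_positive
print(f"P(cancer | positive test) = {p_cancer_given_positive:.6%}")   # about 0.001%

# The "natural frequency" framing of the same numbers: out of 10 million people
# screened, roughly 1 has the cancer (and very likely tests positive), while
# about 100,000 healthy people also test positive — so a positive result almost
# always comes from a healthy person.
```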

ANNA: I think that is correct. [laughs] I think that's one of the examples they gave, like that humans in general are not as biased as one might think.

SPENCER: So if you present the probability information, people get the answer horribly wrong. You might say, "Ah, humans are extremely biased, or have these kinds of flaws." But if you restructure the information in a way that is arguably more realistic and more natural to the way humans tend to think about things, then actually we're much more accurate. So I'm interested, because you've dug into this a lot: where do you currently stand on the great rationality debate? Do you take one side or the other? Or what's your perspective on the debate overall?

ANNA: That's the perfect question, because that's what I'm actually writing about. My current work is on trying to unify them. My argument is that they are approaching the same phenomenon from different ends: they do not really disagree about facts in the world, they just have very different perspectives. I think both sides are very valuable, neither can be reduced to the other approach, and they're really not disagreeing about facts in the world; they just abstract from the same phenomenon in very different ways.

SPENCER: Interesting. Can you say something about how you might bridge the divide between them?

ANNA: So on one side, we have this axiomatic rationality, where you have this assumption of perfect rationality. And then you can apply a lot of further ideas we know about the boundedness of rationality, to move closer to a view where heuristics are already kind of the optimum. First of all, there's the idea of computational rationality, as I said: factoring in the meta-rational calculation of how useful further computation really is. The expected value of thinking longer drops very quickly because of the opportunity cost in the real world, so faster reactions can very quickly become optimal. This whole approach is currently researched by Lieder, it's called resource-rational analysis, and he shows that many of the apparent biases really are an optimal speed-accuracy trade-off.
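Here is a toy sketch of that speed-accuracy trade-off (my own illustration, not Lieder's actual model): suppose accuracy improves with deliberation time but with diminishing returns, while every unit of time spent thinking carries an opportunity cost.

```python
import math

# Toy model: how long is it worth deliberating before acting?
# Assumptions (illustrative only): accuracy(t) = 1 - 0.5 * exp(-t), the decision
# is worth `reward` if correct, and thinking costs `cost_per_second` in foregone
# opportunities.
def net_value(t, reward=10.0, cost_per_second=2.0):
    accuracy = 1 - 0.5 * math.exp(-t)
    return accuracy * reward - cost_per_second * t

times = [i / 10 for i in range(51)]                   # 0.0 to 5.0 "seconds"
best_t = max(times, key=net_value)
print(f"optimal deliberation time = {best_t:.1f}s")   # about 0.9s with these numbers

# With a non-trivial opportunity cost, the optimum arrives very quickly:
# a fast, heuristic answer can be the resource-rational choice.
```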

SPENCER: That's super interesting. I think one thing that I struggle with regarding that view — I haven't looked into the work you're referring to, so it might just be a misunderstanding — is that there seem to be plenty of situations where you actually have plenty of time to think, and yet, even with lots of time, we don't seem to be able to think about them very accurately. For example, the sunk cost fallacy, where people might have lots and lots of time to think about quitting some project that's actually not very valuable, and yet they still feel stuck in the project and don't want to give up. Because if they give up on it, they'll suddenly feel like they've wasted all that previous investment, even though the investment is gone no matter what, whether they stick with it or give up. So that's just one example where it seems like we have unlimited time and yet we still really struggle to do the right thing sometimes.

ANNA: Yes. With the example of the sunk cost fallacy, going through all the material gave me the impression that even things like this might not straightforwardly be biases, because we are radically dependent on the structure of the environment and how we are embedded in our lives. How I've chosen things before might actually place meaningful constraints on how I continue to act. If I've already invested years into something — and there's so much uncertainty about whether it will turn out great or not, which I often really don't know — it just might make sense to stick to it. I'm not saying this is always the case, but I think it's not always a fallacy. Having read all this material about the non-trivial points and assumptions that are being made, even relatively straightforward-seeming fallacies, like the sunk cost fallacy, might actually have some usefulness, given that we are embedded in this very rich and very uncertain life. Having already worked for years on a project, and not knowing whether it will go well in the future, might still mean that it is more useful to continue working on it. We often just really don't know. And given that life radically depends on what you've done before, it might be useful. I'm not saying I think it's never a fallacy. But many of those very obvious biases no longer seem really obvious to me.

SPENCER: That's really interesting. I guess in my current thinking on this, I'm a bit more on the Kahneman side. I do think the human brain actually does irrational things a lot, really, really commonly. But that being said, I also see significant value coming from the other side, which basically says, "Okay, we might be making mistakes of certain kinds, but the mistakes are not always actually that serious in a real-world decision-making context." In other words, just because we can get someone to do something really silly in a lab doesn't mean they would actually do something that silly in a real-world decision-making context, when the information is structured differently and it's much more realistic. I think a really nice example of this is the little rationality puzzle where they ask you which cards you would need to turn over to verify that a rule is met. The rule is something like: when there's a triangle on one side of the card, there's a square on the other side, and you have to decide which cards to turn over (and people are not very good at this). But when they rephrase the question in terms of checking people's IDs to see who can be let into a bar — and the question is structurally identical; it's exactly the same information, just rephrased in terms of the real-world context of checking IDs — suddenly, people's accuracy improves dramatically. I think that's a good example showing that a silly mistake in a lab does not necessarily mean a silly mistake in a structurally equivalent real-world context. But that being said, I just feel we humans, all of us, constantly make mistakes in our thinking. I'm wondering, just on a gut level, do you feel like that's true? Or are you not so sure?
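A brute-force sketch of the card puzzle Spencer describes (the Wason selection task; the specific shapes are just for illustration): a card has to be turned over only if some hidden face could falsify the rule, which singles out the "triangle" card and the card showing something other than a square.

```python
SHAPES = ["triangle", "square", "circle"]

def rule_holds(side_a, side_b):
    # Rule: if a card has a triangle on one side, the other side has a square.
    if side_a == "triangle" and side_b != "square":
        return False
    if side_b == "triangle" and side_a != "square":
        return False
    return True

visible_faces = ["triangle", "square", "circle"]

# A card must be flipped if some possible hidden face would violate the rule.
must_flip = [face for face in visible_faces
             if any(not rule_holds(face, hidden) for hidden in SHAPES)]
print(must_flip)   # ['triangle', 'circle'] — the square card never needs checking
```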

ANNA: I think by now, I'm agnostic about that question. I think we definitely make mistakes in domain-specific areas. But I'm not sure whether on a more general level you could really improve rationality. What I mean by scientific perspectivism is that I think both sides are extremely powerful as tools for discovery. I'm not really agreeing with one or the other; I just think each has its own way of abstracting from the topic and then doing research on it, and I think we really need both. Whether humans are rational or not is more a result of the basic assumptions you make — whether you believe in the axioms of rational choice or whether you go with the more ecological approach.

SPENCER: I guess the way I think about this — it might be a little bit different from both of the approaches — is: are people taking actions that actually end up leading to consequences that are bad for them, when they could have done better? That is, at the moment when they took the action, was there enough information available that they could have processed, given (let's say) the amount of time and what was known, that could have led them to do better according to their own values? In other words, by their own way of assessing how good a decision was, they could have done better than they actually did. I think I see a lot of situations where that kind of mistake is occurring, where in retrospect they're like, "Oh man, I should have actually done this; I should have known that I should have done this."

ANNA: Yeah, I agree with that part.

SPENCER: So I guess that's the way I prefer to look at rationality: rather than looking at it with regard to a perfectly rational hypothetical agent, but also preferring that approach to the ecological view of whatever causes you to survive — like, if a rabbit is able to survive in a particular environment, then the rabbit is acting rationally. I don't know, is the view that I'm giving different from both of the other two? Or does it somehow get synthesized into one of those views?

ANNA: I think it's slightly oversimplifying it. I think the main claim by the ecological approach is really just adaptive fit to the environment. So it's not just about survival, but just what is useful, instead of following certain axioms.

SPENCER: I get this funny thing where sometimes you're talking to people that seem to believe that humans are actually acting really rationally. And yet, they don't seem to believe that the individual people they know are acting rationally. They don't think their friends are acting rationally. They don't think their colleagues are acting rationally. They also realize that they themselves are not acting rationally, yet they somehow believe that humans are acting rationally. I find it very baffling.

ANNA: Yeah, I mean, that's one of the underlying assumptions there. You kind of assume all agents or all humans to be the same. But of course, there's research by, for example, Stanovich on the individual differences in human decision-making, which is also part of the axiomatic approach.

[promo]

SPENCER: So my understanding is that there's some connection between acting rationally and wisdom. Do you want to talk about what that is?

ANNA: Sure. So I stumbled upon some research on wisdom — and usually, as a person, I wouldn't be so drawn to that because it sounds kind of esoteric, but there is some work by John Vervaeke — that describes wisdom as a form of meta-rationality. What he says is basically, "Okay, there cannot really be a theory of fitness, because fitness radically changes depending on the environment. So there can only be a theory of evolution, namely natural selection." And he applies the same reasoning to the question of rationality. So instead of asking what properties rationality has, he asks, "What is the process by which one becomes more rational?" And that's his concept of wisdom.

SPENCER: So then wisdom is the process by which we become increasingly rational?

ANNA: Yes.

SPENCER: Interesting. It's a fascinating view on it. What do you get out of that way of looking at wisdom?

ANNA: So the main point he makes is that, of course, you need self-reflection. Given that, you can transform seemingly unsolvable problems, ones that are completely intractable through your action, into simple solutions that you suddenly can act on. And what you need for that is what he calls insight.

SPENCER: Yeah. So this reminds me of — I wonder if this is actually based on the same paper — a paper about wisdom you pointed me to that I thought was super cool. The way I recall it, and maybe it's a little bit compressed at this point in my mind, is that wisdom is about understanding the structure of the world well enough that you can make choices, or advise people on choices, that help them get to better outcomes. Do you feel like that is the same thing you're talking about? Or is it different?

ANNA: That sounds like rationality to me, like, what is true and what to do. [laughs]

SPENCER: Got it. Interesting. Okay. So you view that as basically the same as how do you become more rational?

ANNA: No, what you just said — what is the structure of the world and how would you act to live your values — that's the definition of rationality: what is true and what to do. It's basically the only game there is; what other questions would we even need? But wisdom, as he describes it — or at least what I took away from it — is this process by which you make yourself more rational, more capable of reaching your goals in the world.

SPENCER: Yeah. One thing I like about this view is that it seems to align well with what you might think of as a really wise person you might go to consult. You go to the wisest person you know, and you say, "Okay, I'm dealing with this difficult situation in my life, what do you think of it?" And they have this kind of internal causal model of the way the world works, and they're taking what you told them about your situation and plugging it into that causal model. A lot of it's intuitive; it's not like they have it all in their slow, reflective system. A lot of it is just an internal, intuitive causal model they have. And from that, they're able to imagine manipulating different variables: "Well, what if you were to do that, what would happen? What if you did this, what would happen?" And then their advice is essentially the prediction of their causal model: "You should consider doing this thing, because maybe that will actually lead to the best outcome causally."

ANNA: When I think of a wise person, I feel like they are more calm, and they have really way stronger models, so they have to run around less and can calmly pick out the single most powerful action. That's how I understand the process: you constantly have conceptual changes about where you have to put your attention and which little actions actually make a difference.

SPENCER: That makes sense. I think one thing that this doesn't seem to capture about wisdom — that seems to me is often implicit when we're talking about someone being wise — it seems like the wise person often is not directly solving your problem. They're not just, "Oh, do x and then your problem will be solved." It's more like they're giving you the right things to reflect on so that you can understand your problem or they're giving you a process by which you can arrive at a solution or something like that. So it's like 'don't just give a man a fish, like teach a man how to fish'.

ANNA: It's this kind of meta-wisdom: not just giving the wisdom, but helping the other person become wiser themselves.

SPENCER: Yeah, exactly. It's not just giving them the output of your causal model, but handing them parts of your causal model that they can then work with, or giving them a process by which they can develop parts of their own causal model.

ANNA: Yes. Vervaeke uses this term about overcoming foolishness. What I strongly associate with it is really this idea that you made a mistake, then you reflect on it, you have this aha moment and realize what you did wrong, then you change, and the next time you can do it better.

SPENCER: Yeah. Well, that also seems like part of why advice is often not helpful. Two things: One, oftentimes the person you're talking to doesn't have enough context to really understand deeply what the problem is. And I think one thing that separates a really, really good advice giver from a less good one is that the really good advice givers invest a lot of time to really understand the nature of the problem you have. But even that's not enough, because then they have to actually understand the world well enough to know what to do with that information once they have it. So there are those two pieces. But then the third piece is that a lot of times — unless we've actually made the mistake and learned the thing for ourselves — even if someone gives us advice, it may not carry the right import. You could spend all day long reading advice for how to run a startup well, but if you haven't lived through any of that, and you haven't had any of the problems that that advice is based on, it might actually be really hard to take that advice. So I think this is why advice can often feel cheap: until you have the life experience, the advice is just words. You have to actually feel them on a visceral level; you can't just hear them.

ANNA: Yes, exactly. He also describes expertise as sophisticated procedural knowledge. Because, as you say, just abstract information is not the same as really having this procedural knowledge of how to act.

SPENCER: It's very rare that the advice we need involves just telling us a fact about the world or something like that, right?

ANNA: Yes. There are two really good examples to get what he means by that. One is the nine dot problem — which you might have heard of or seen before — where you have these nine dots in the shape of a square, and your job is to strike through all of them with four straight lines. You try around, and you never really make it, and you just really wonder why you can't do it. [laughs] The solution is that you are allowed to draw outside of the box the dots form — that's where the term "thinking outside the box" comes from. Suddenly you can make a longer diagonal line, strike through several of them, and then actually connect them all. It's so interesting, because you just implicitly assume you are not allowed to leave the square, and then you cannot connect all of them. But once you've understood that you are allowed to — and it was never explicitly said that you're not — then it's super easy.

SPENCER: It's such a great point. Because a lot of times we're constrained without even realizing we're constrained. We're considering a set of actions and somehow we've already limited almost every possible action in the world, and now we're only considering two or three. And then the wise person might be able to say, "Hey, you know, there's this other action, or maybe these other 10 actions you haven't even thought about. Maybe you need to ponder those a little bit." So they're kind of removing artificial restrictions. I love the example you gave, because in that puzzle, nobody ever says you can't draw outside the box. Yet somehow, people assume that you're not allowed to draw outside the box and we assume it's part of the problem.

ANNA: Yes. Invisible walls that you have to either learn are there or realize are not there — that's also a very powerful analogy. I was once on a walk with a friend, and there were these flowerpots standing around, and I didn't walk between them, because I knew they were meant as a boundary to a place around a restaurant where other people didn't want me to go. So I wanted to go around them, and then my friend just went in between them. And it was like, "Right. You can walk through, right? Because it was just physical; it was possible." But I had this super strong theory-of-mind concept in me: "Okay, I know it is meant as a boundary, so I also perceive it that way." But of course, physically, it was completely possible from first principles to walk through. And I think that's a good analogy, again, for a lot of other things in life.

SPENCER: I always think it's really cool when people design their own life from scratch. When you meet someone, you're like, "Wow, you just decided what you want your life to be like, and it's totally [laughs] different from almost everyone's life." And I always think that's really fascinating. One thing I was just gonna say is that, it seems to me like there's this weird contradiction, where we kind of over and underestimate humans simultaneously along this dimension of how different people are from each other. So on the one hand, I think people's minds are much, much more different from each other than we realize. And we tend to assume everyone's minds are much more like our own than they really are — this is the idea of the typical mind fallacy. For almost any trait you can imagine, people actually differ on that trait and their internal experiences can be wildly different. On the other hand, there's also this sense in which humans copy each other to such a high degree that this sort of space of different actions you can take is way more constrained than you might imagine, with way more people just doing the same thing that everyone else is doing, not even considering any options outside of that. So you ended up with this: On the one hand, the internal experiences of people are wildly different. On the other hand, the external behavior is maybe much more similar than it could be in theory, and there's many more opportunities on the table than like almost anyone is even considering.

ANNA: Yeah, connecting this back to the great rationality debate: there is really good work by Mirta Galesic, from the Santa Fe Institute, about how social reality and social networks, to some degree, constrain but also predict what is even rational to do, and how, given that social reality matters a lot for what helps us with our goals, it seems like humans are indeed very well-adapted to their partially social environment. For example, there's of course a trade-off between believing what is true and believing what the people around you think. So yeah, I agree with you regarding the constraints given by the social environment.

SPENCER: Right. And there are two aspects to that: One is that we, just by nature, tend to copy people; that seems like a fundamental part of what humans do — everything from little children copying their parents to learn skills, to adults copying each other to a really strong degree. Then there's a separate thing, which is that social forces create incentives. Once everyone's doing something a certain way, there are now incentives around that: you might get benefits from doing things similarly, or you might get punished for doing things differently, and so on. So the incentive gradient of social forces is very real and important to take into account.

ANNA: Yeah, I agree as well. One concept I really liked there is to be very explicitly aware of social norms, or social normative concepts, so as to more deliberately and intentionally choose from first principles what you really want — while also being aware that questioning certain norms and behaving against them, of course, comes with costs.

SPENCER: Like your example of walking between the flowerpots. On the one hand, that was a physically allowable action and your brain hadn't even really allowed you to consider it. On the other hand, it might actually annoy the people who own the restaurant, if you walk between the flowerpots. So you have to weigh those two considerations, right?

ANNA: Yes. And this was, of course, a peaceful environment in a [laughs] western city. But in a more uncertain space, there could even be a danger when walking through there — when it's private property, and you don't know how [laughs] much people care about their space not being walked on.

SPENCER: Right. If there's some valuable reason to walk through the flowerpots, which maybe there wasn't in this case, but if there was, you'd at least want to be able to consider that action, right? You'd want it to be on the table. But you'd also want to take into account the potential social repercussions of it.

ANNA: Yes. [laughs] But then, of course, you easily have a combinatorial explosion, and suddenly, everything is NP-hard. [laughs]

SPENCER: It's like our brains can only deal with a relatively small amount of variation. You can't consider all of the actions, so it's sort of a question of how ruthlessly you want your brain to narrow down the space of actions before they even appear in conscious awareness.

ANNA: This is actually another topic that John Vervaeke is doing research on — I think it's even mentioned in the same paper. He calls it relevance realization: the whole process of what to even focus on and what to cut out, because computationally, this is extremely difficult. There's this classic text about a robot — I think the example is about describing to the robot what it should do, and then at every step having to tell it what to ignore [laughs], which is really, really difficult. Vervaeke addresses this as an emergent process where you constantly have your goals in mind, and while interacting with the environment, you learn what is relevant for achieving your goals (but it's really a process). So it's actually very connected, because otherwise you do have this combinatorial explosion.

SPENCER: Right. And clearly, our minds are filtering out almost everything. By the time things get to conscious awareness, there's only this tiny little bit left. At a very simple level, you think about all of these pixels of information of light bouncing off of things and hitting your eye. By the time you are aware of things, you're like, "Oh, there's a lamp and there's a tree." You're not like, "Oh, there's a tiny little speck of yellow," right?

ANNA: Exactly. This is actually the more modern great rationality debate that's currently ongoing, because the main topic there is the question of perception. A lot of the older models of rationality assume this all-seeing 'eye': they implicitly assume we're omniscient, that we see everything, take in the incoming information, and then update accordingly. But of course, our perception is highly dependent on what we need to see to reach our goals.

SPENCER: So was there another aspect of that wisdom paper you wanted to mention?

ANNA: Yes. I said there are two examples, and I mentioned the nine dot problem. But again, for the process of making an intractable problem tractable with an insight, there's another very nice example. It's nicer to see it visually, but imagine a chessboard — which, as you know, has 64 squares — and now it's covered in 32 domino stones, each of which covers exactly two squares of the chessboard. Now the question is: if I remove two squares of the chessboard, namely the ones at diagonally opposite corners, can I still completely cover the chessboard with the remaining 31 domino stones? Or would something be left over? Would something be missing? Or is it still possible to completely cover it?

SPENCER: Right, so each domino covers two of the squares, and you're removing, let's say, the upper left and bottom right corner squares of the chessboard. And the question is: can I still cover it in dominoes so that every square is covered?

ANNA: So what usually happens now is that people start to move pieces around in their head and just try: "Okay, I would lay them like this, this, this, and this, or like this. Would I not be able to cover it?" And usually it starts to get too complicated to keep all the positions in your head, and they're not really sure. But yeah, that's really how people would approach it. How would you do it? Would you do it the same way?

SPENCER: Well, it's a little unfair, both because I've heard similar problems before and because I'm a mathematician.

ANNA: Yes. [laughs]

SPENCER: But as mathematicians, we're trained to abstract problems away and ask: what is really the core thing underlying this? Who cares that it's a chessboard? That's irrelevant. What's the real structure here? And I think here it's a parity argument: every time you place a piece, you're keeping the parity — you're always covering one white square and one black square, right?

ANNA: Exactly. That is exactly the solution, which means you have insight into the underlying structure of the problem, which then leads to a very easy solution. Namely, when you remove diagonally opposite squares, they are the same color. Since every domino stone covers two squares, which on a chessboard are opposite colors, removing two squares of the same color means you can no longer cover the board with domino stones.
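For anyone who wants to see the parity argument spelled out, here is a small sketch (just an illustration of the reasoning discussed above, not something from the episode):

```python
# Mutilated chessboard: remove two diagonally opposite corners and count colors.
board = {(row, col) for row in range(8) for col in range(8)}
board -= {(0, 0), (7, 7)}                  # both removed squares are the same color

white = sum((row + col) % 2 == 0 for row, col in board)
black = sum((row + col) % 2 == 1 for row, col in board)
print(white, black)                        # 30 and 32 — no longer equal

# Every domino covers one white and one black square, so 31 dominoes would need
# 31 of each color. With 30 of one color and 32 of the other, no tiling exists —
# no search over placements is required.
```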

SPENCER: Yep. And what's cool about that is, not only does it give you the answer quickly, but it also generalizes better. Because now you understand something more: "Oh, I see there's a whole bunch of pairs I could remove that would make this impossible." The principle is much broader than just this one example.

ANNA: Yes. So now you've simplified a very intractable-seeming problem by having a deep insight. Now you can very quickly give an answer, and it just seems very straightforward. This is really the idea of wisdom, as Vervaeke describes it. You have this insight, and through it you have a conceptual change, and suddenly you can act in a more powerful way toward a certain goal.

SPENCER: So what's the generalization of this principle, though? Is it that you need to cut away most of the problem and get to the core aspects of it that actually matter in this particular case?

ANNA: Yes, exactly. That's the process of relevance realization. In an ongoing interaction with the environment, you get a feeling for which ideas or pieces of information really matter for your goals. So instead of blind actionism, you have a strong, deep understanding of what really is relevant to your goals. And he really emphasizes this point about self-reflection and self-understanding. So to some degree, I would say the paper really says it matters to know thyself.

SPENCER: Where does self-understanding come in?

ANNA: Regarding the epistemic rationality part, I would say it's the idea of seeing through illusion. Once you have made certain mistakes a couple of times, or just misperceived certain things, and then gotten feedback, you can understand that this is the way you distort reality. And then you can see through it by having insight into what you're doing.

[promo]

SPENCER: So, we've been talking a lot about rationality. But my understanding is that your focus is on cognitive science more broadly. Do you want to tell us a bit about how you got into cognitive science and what your journey there has been like?

ANNA: Absolutely. I had a very interesting journey into cognitive science, because I got there with a super arrogant mindset. I had read all those cognitive psychology books before, so what I assumed would happen is that I would just learn more of what I already knew (basically, I already knew everything). I went in with that kind of mindset. But what I really had to learn is that thinking interdisciplinarily really means being able to shift paradigms. And that means, to some degree, first breaking out of the perspective you're coming from. Because I continued to see all the work through this super strong psychological lens that makes a lot of presumptions. Then I realized, "Okay, there's more to it," and I humbly had to notice that I was not as smart as I thought.

SPENCER: [laughs] Well, I feel like when we only know one way of looking at a thing, it's really easy to be overconfident, to over-rely on it, and to assume that it's a much more powerful theory than it is. As soon as we learn there are two ways of looking at a thing, that at least opens the door to being like, "Oh, okay, so this is not a settled question. There are actually multiple perspectives." And then maybe we're more open to the idea that there's a third and fourth and fifth way of looking at it.

ANNA: Yes, exactly. And what this did for me on a meta level is give me some understanding of the cognitive science of science. There's a book by Paul Thagard called "The Cognitive Science of Science," and what I'm describing here is basically the cognitive science of cognitive science. What he talks about is: we know, of course, that there were a lot of scientific revolutions, like going from the geocentric to the heliocentric view of the world. And underlying that, there's really a conceptual change. This is also what happens a lot in science education. So instead of just learning more of what you already knew before, you really change the underlying concepts of how you approach things.

SPENCER: I see. So the whole paradigm shifts, so even the way that you look at the problem changes.

ANNA: Exactly. So basically, coming into cognitive science turned out completely different from what I assumed. For me, it really was a course in the philosophy of science. It was associated with a lot of paradigm shifts, and with very viscerally experiencing how frustrating interdisciplinary communication can be. In the end, I go along with the definition of cognitive science as partially applied epistemology, because it's both the content of what you look at, and also — to really understand different paradigms — you directly experience what it means to make strong shifts in how you view things and how you view what you know.

SPENCER: So can you just tell us about a few of the paradigms really briefly? What are some of the paradigms, and the kind of a really loose definition of them?

ANNA: I think one paradigm that's important historically, as existing before cognitive science, is behaviorism, where you had ideas like psychophysics. So you would basically just measure input and output — a stimulus and the resulting behavior — instead of making any assumptions about what's happening inside the organism or the human.

SPENCER: So this would be like B. F. Skinner?

ANNA: Exactly. So this is really before cognitive science. And then you had this paradigm shift to cognitivism, where you assume there's information processing going on, and we can also try to understand that, and it is useful to make those assumptions because it actually explains differences.

SPENCER: So rather than treating the human mind as a black box that takes input and produces output — where the stuff in the middle doesn't matter and all that matters is the input-output mapping — it's like, "No, no, there's actually interesting stuff happening [laughs] in between those two, and we can actually learn about those things." And it helps us make better predictions to have a model of what's going on underneath, right?

ANNA: Yes, exactly. There are a couple of other paradigms; I'll just throw the names at you: for example, computationalism, connectionism, and embodied dynamicism. And then, for example, the 4E approach to cognition, which is enactive, extended, embedded, and embodied. In the beginning, I was really touchy about those terms because they seemed very esoteric to me [chuckles]. But now I understand that they really are, partially, just different perspectives on the same phenomena. In cognitivism and also computationalism, you assume that the environment is there, and in the environment there's this agent. In the enactive approach, you take the temporal aspect to be much more relevant, because one thing just happens after the other. It's really a dynamical relationship between the organism and the environment, and they're not really separate.

SPENCER: I see.

ANNA: Yeah, and the extended approach, for example, just asks: where should we draw the boundary when it comes to a mind? It gets very philosophical: what even is a mind? For example, when I look at my smartphone, I would say, "Okay, this is not part of my mind, because it's outside." But if we had the same thing, way smaller and directly connected to my brain inside my skull, then we would say, "Okay, now it is kind of part of my mind." So it tries to take this view where you think more about which outside tools are part of your cognitive process.

SPENCER: Right. If your cognitive process involves thinking for a while, then typing into a calculator, and then continuing to think, well, now you've inserted this calculator, which is an external tool, into an internal cognitive process, and so you really have enhanced your thinking in a certain way. So, you drew a diagram of the field of cognitive science, is that right?

ANNA: Yes. So, I have this map of the cognitive sciences, or of cognitive science, and it's online on my website. It was also popular on Reddit and made the front page of Hacker News.

SPENCER: Nice. That's awesome, I've checked it out. It's really, really cool. We'll put a link in the show notes. I definitely recommend people check that out. What was your goal with that map?

ANNA: So, I was coming toward the end of my master's degree in cognitive science, and I still felt like it didn't really make sense to me. So as a research project, I said, "Okay, I'll try to make sense of cognitive science for myself." And to do so, I tried to represent it externally. Even just deciding on the dimensions of the visualization took a really long time. In the end, I decided to make it historical — to have it grounded not just in idea space, but in basically the human history of the last 100 years. That's the x-axis. Then it's divided roughly by different disciplines, with bubbles representing the main publications. Underneath that, I have colorful blobs to show a bit of the connection to the different paradigms.

SPENCER: So what do you feel are some of your takeaways? Or, having made this map, how do you think about the field differently?

ANNA: This is very difficult to put into words. One thing was all the different ways I considered representing it that then did not really seem useful. For example, using the dimension of artificial versus biological systems is not very useful, because the field really works with the isomorphisms between different systems — whether something is a natural or an artificial system is not really a relevant dimension at all. So I decided many considerations were not relevant; a lot of the insights are not really on the map. But of course, I also gained some understanding of how, for example, historical events shaped the history of cognitive science, who the key researchers were, and where some major shifts happened.

SPENCER: I really like the process of trying to draw something that you feel like you understand — trying to do a visualization of it. I would like to do it more myself. Forcing yourself through that process actually forces you to grapple with the kinds of questions you were talking about: what are the relevant dimensions? What is the information I should be surfacing here? Because you can't surface most of the information — you just can't fit it all in one diagram. So you have to actually think about which aspects are important enough to show. Also, it's just a really powerful way of organizing your thinking on a topic, because it's very hard to keep more than, let's say, seven, or at most ten, items in working memory. But by putting it out in a diagram, you can have it all in front of you — way more information than you can actually store in your mind at once.

ANNA: Yeah, I would say the process of making such a visualization really is a good example of the extended mind [laughs]. You put it out there and then use it as an addition to your own working memory, as you said. Then you can reflect back on it without having to keep it in your head, which is very useful. But I think there are more useful parts to it. For example, I think having this big external representation of my current understanding of cognitive science makes it way easier for experts in the field to point out my mistakes and where I have big gaps. If it were sequential communication, I could only state one belief at a time, and then they could say, "Oh, this doesn't seem right." But this way, it's just much easier to really point out misconceptions.

SPENCER: That's another big advantage of trying to draw a diagram of your understanding of something: you can show it to another person. It would be very hard for you to describe your understanding of cognitive science, but you can show them this picture. And then also, as you said, it's a lot easier for them to notice where you're wrong, or at least where they disagree with you. I think that's really cool. The process of making such a diagram is also such a powerful way to really consolidate your knowledge. There was this guy who went through different theories of anxiety and tried to draw diagrams of them. And I thought it was so cool, because in many cases, the original authors who'd come up with those theories had not drawn diagrams. So he was trying to draw diagrams of theories that nobody had drawn diagrams for before. And once you look at his diagrams, it's really fascinating, because some of the theories just become way more intuitive once you see the diagram. And for others, you're like, "Huh, that doesn't really seem to hold together very well, now that I'm actually looking at it in picture form." In words, maybe it kind of seemed like it held together. But once you draw a picture, you're like, "I don't really know about that."

ANNA: Yeah, that's a good example as well. I'm, in general, very excited about information design. It makes me happy from a purely aesthetic point of view, and because I feel like it gives me insight very quickly. But also, on a more societal level, I feel like there's just so much noise, and the curation of information becomes more and more important. Taking a lot of information and presenting it in a way that is very easy to process is just very, very valuable. Historically, there was this project by Otto and Marie Neurath, who really thought, politically, that it would be kind of a duty of the government to make good representations of all the information so that the population can vote better, because they're more educated — because just reading through a lot of material is a lot of work.

SPENCER: And I love that idea [laughs] of trying to do a public service by presenting information well. I feel like in practice it's a real problem, though, for two reasons: One, trying to make a cool-looking visualization can actually be at odds with presenting the information in the most useful and informative way. I find that infographics often look cool, but if you actually think about it, they're not presenting information in a way that's easy to understand. Second, it's so easy, in presenting information, to bias the answer based on what you want the answer to be. A really interesting example of this is the debate going back and forth about how much inequality has increased in the United States. You have these different diagrams, some of them showing inequality going way up, others showing it being flat or not going up much for the last 50 years. And it actually comes down to complicated decisions about how you treat, for example, health care paid for by an employer — how does that affect inequality, and different things like this? So there are actually a lot of choices in the way you present information that can lead to very different conclusions. And if you don't trust the person to be relatively unbiased in the preparation of the information, it can actually mean that they're swaying you one way or the other.

ANNA: I agree with that. But I think it doesn't really make the point against information design in general; it just again says, "Okay, people are biased, or people have agendas." I think the main point really is that visually represented information can act completely differently on our senses than other formats can. There's this paper by Herbert Simon called "Why a Diagram is (Sometimes) Worth Ten Thousand Words," and in it there is a drawing of a mechanical device. The same information is also represented, I think, in a code that just describes the individual parts. So from an information-theoretic standpoint, it's equivalent — it's exactly the same information. But one is, of course, way closer to what we can process. When you see the diagram, the little drawing, you can directly imagine yourself pulling the individual parts of the mechanism, and you can imagine how it would act. But the other representation you would first have to transform yourself (which is a lot of work), and that has to be considered when making materials.

SPENCER: Totally agree. I think most humans are extremely visual in the way we operate. For example, for most people, it's much easier to remember a vivid visual image than to remember a bunch of written information. Being able to make use of the visual part of your brain often makes things at least better remembered, and possibly also better understood. There's another thing about a visualization or diagram, which is that you move your eye across it in a nonlinear way. Writing you have to process linearly, word by word; with a diagram you can move up, you can move diagonally, you can look at two items and see how far apart they are. So I feel like it allows much more flexible processing of the information.

ANNA: Yeah, text is sequential and you have to process it in a temporal fashion, while a diagram is spatial. And then often, for example, how prominently a connection between two points is drawn already gives you additional information about how important it is. So you can really go to the most important things first, and then work down. And regarding perception, what we really know from all the research by Kahneman and Tversky on system one and system two is how close perception and understanding really are. Once you've trained some formerly complex thinking to the level of expertise, it becomes part of you, and you directly perceive it in your environment. What we know from expert intuition in chess players is that they really see strong moves. They don't have to effortfully think about it; the way they move their eyes already implies that they directly perceive, on the board, what to do. So I think perception and understanding are very closely connected. And what I really like, to drive home the point, is just the saying "I see," which captures the whole idea.

SPENCER: Yeah, I was hearing this pretty good chess player talking about how people will always ask him, "How many moves deep do you see in the chessboard?" And he likes to tell them one [laughs], because a lot of the time he's just processing what's in front of him and the strength of different moves becomes apparent immediately. He doesn't have to think out 12 moves most of the time.

ANNA: Yes, exactly. All the games he has played are already integrated into the moves he will actually consider, and those are directly perceived on the board.

SPENCER: Anna, thanks so much for coming on. This was really fun.

ANNA: Thank you, too.

[outro]
