CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 053: Everyday Statistics and Climate Change Strategies (with Cassandra Xia)

June 24, 2021

What are "shed" and "cake" projects? And how can you avoid "shed" projects? What is the "jobs to be done" framework? What is the "theory of change" framework? How can people use statistics (or statistical intuition) in everyday life? How accurate are climate change models? How much certainty do scientists have about climate change outcomes? What are some promising strategies for mitigating and reversing climate change?

Cassandra Xia (@CassandraXia) is the creator of Adventures in Cognitive Biases and co-founder of the non-profit Work on Climate. She is fascinated by how human biases affect the actions we take as a society and how to hack human psychology to get the change that we want. She was previously affiliated with the MIT Media Lab, the MIT CS department, and Google AI. More of Cassandra's work can be found at cassandraxia.com and workonclimate.org.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Cassandra Xia about project duration and complexity, using interviews for user research, the value of statistics for everyday life, and climate change strategies. By the way, Cassandra is the very first guest to make a repeat appearance on our show. She was first on the show back in episode 13 with Hank Rosette, where they talked about liberalism and conservatism. Anyway, here are Spencer and Cassandra.

SPENCER: Cassandra, thanks for coming on. It's really great to have you here.

CASSANDRA: Thanks for having me, Spencer. [laughs] You're always optimistic, so I'm excited to see what we're going to do today.

SPENCER: Me, too. The first topic I wanted to discuss with you is the idea of sheds versus cakes, which is, I believe, a pair of terms that you coined. Can you tell us about what they mean?

CASSANDRA: Yeah, a shed is basically a project that has gone on for a really long time that you are no longer excited to work on, and won't amount to much.

SPENCER: Can you tell us where that word comes from, 'shed'?

CASSANDRA: 'Shed' comes from Reddit. I was browsing Reddit one day, and I saw that someone had spent nine years working on building a shed [laughs]. You can picture it: it's gray, half shed, half greenhouse. It was clearly someone quite intelligent who nevertheless spent nine years working on this. I just read the whole post transfixed. It was a long saga about how everything went wrong and how he finally finished. It really resonated with me because I had a similar shed project — not an actual shed, but I worked on a side project for five years and finally finished it — so that's what a shed is. And then the opposite of a shed is a cake: when you finish making a cake, everyone's very excited to eat it. It is also small, and it's time constrained. The longest that you can work on a cake is several days — if you're making a very complex wedding cake — and after that it starts to rot. So I coined the cake metaphor to remind myself to make cakes, not sheds, for future projects going forward.

SPENCER: So a shed is a project that has an indefinite scope that could just keep going and going and going, that you have to convince people to be interested in — you go to a party and you're like, "I've been working on the shed for nine years" — it's tough to get people excited about that. Whereas a cake is the opposite; it's inherently time bound, and furthermore, it's inherently exciting to people. You go around with your cake and it's like, "Oh, I want some of that cake," [laughs] you don't have to go around trying to persuade people to be excited about having a piece of cake.

CASSANDRA: Exactly, exactly. Bring cakes to parties, not your shed. [laughs]

SPENCER: Great. Do you want to tell us about your shed?

CASSANDRA: Well, it actually begins with how you and I met, Spencer. [laughs]

SPENCER: I met you because of your shed, actually. [laughs]

CASSANDRA: Yeah, although at that point, it wasn't quite a shed. [laughs] We met because of my master's thesis project. I was doing a master's at the MIT Media Lab, which is a very exploratory design-and-art kind of school. It's a two-year program and then, at the beginning of the second year, I realized, "Oh, my gosh, I have to graduate soon. I have to produce a project." So I spent a few months working on a story game called Adventures in Cognitive Biases. It's an adventure game where you learn about Bayesian statistics and overconfidence bias along the way. I produced this under great time pressure and shipped it to the world, and it was well-received by the Rationality and Hacker News communities.

SPENCER: When I saw this game online, I was like, "Holy crap! Who made this?" and I immediately contacted you until we became friends. [laughs]

CASSANDRA: Thank you so much for doing that [laughs]. Yeah, I was really embarrassed by this and after I graduated, I kept trying to redo this project.

SPENCER: Let's talk about what it was for a moment. It was a game that would teach Bayesian thinking. It's interactive and you'd have different challenges and you'd meet different characters, and you'd have to use Bayesian thinking and probability to solve problems. Is that right?

CASSANDRA: That's correct, yeah.

SPENCER: So what was so embarrassing about this?

CASSANDRA: Well, it was kind of clunky. I'm a programmer by training. My undergrad degree is in computer science, and the code is a giant spaghetti mess. It was my first JavaScript project. I had no idea what I was doing [laughs]. It was under intense time pressure that this thing shipped. And even if I tried to add extra modules to it, if I wanted to expand it, the way that I had written the story didn't easily allow me to do that. And the code was a giant rat's nest, which also didn't let me do that. And I was thinking, "Oh, I'll just redo it and it'll be clean, and then the story will be extensible, and..."

SPENCER: What's fascinating to me about that is that those things you're pointing out that were embarrassing to you, are things that no user cares about. No user cares about how spaghetti your code is. No user cares about how easily extendable it is. [laughs]

CASSANDRA: I guess so. I guess a user may want extra modules. There are other cognitive biases I would have loved to address. But yeah, you're right. I guess I could have tacked it on.

SPENCER: Even with a page refresh, I doubt users would have cared that much. [laughs]

CASSANDRA: This is one of the things I really admire about you, Spencer, that you are able to ship things so consistently and on schedule. Yeah, what's your secret? How do you think about this?

SPENCER: Well, I'm not sure that's a fair compliment. [laughs] For example, I have one project that I made two years ago. It's not yet up and shipped but, to be fair, I have shipped a lot of stuff. So, sometimes I feel like I get hung up on shipping things, but I really do aspire to ship things as soon as they're ready. But I guess the way I think about it is, you always need to be getting valuable data and feedback. And the intuition of 'ship as soon as possible' is a useful heuristic because it helps you get important data and feedback, but it's not the only way to get it. For example, let's say you're trying to create a new technology to solve a really big problem. It might actually take years and years and years to make a good enough technology to solve that problem. That's not out of the question at all. The danger is if, during those years and years and years, you're actually not getting tons of feedback from the world and you're just sitting in your garage building the thing, then there's a really good chance you're gonna build something that people don't actually care about or want. Whereas if you actually build in the feedback loops to get the valuable information all the time, then I think in some cases, you don't necessarily need to ship right away. In fact, for really big problems, the version 1.0 that you could ship in three months is probably not going to solve the problem.

CASSANDRA: I'm nodding so hard right now. [laughs] And yeah, that's kind of the punchline I learned after five years. [laughs]

SPENCER: So you create Adventures in Cognitive Biases and then you're like, "This is not good enough. I don't like it enough." And then what happens? How do you go from there to five years later?

CASSANDRA: Right. After Adventures in Cognitive Biases' first launch, there was a warm reception for it. Unfortunately, that's also when scope creep came in. That's a term that programmers know well, when your project keeps getting bigger and bigger [laughs], because it has to do more and more. So through Adventures in Cognitive Biases, I met some pretty cool people like yourself and Eliezer Yudkowsky from "Harry Potter and the Methods of Rationality." Eliezer and his partner Brianne were pretty excited about this and actually helped me do an Indiegogo campaign to do the next iteration of this. But then that's where I was like, "Wow! It has to be better than Adventures in Cognitive Biases, and it's gonna cover all these different things. In addition to everything we've covered before, it's also going to cover expected value and variance, and probability distributions and all the possible ways they can use probability in daily life." [laughs] Scope creep came in, especially because then I took people's money for this project and promised all these things.

SPENCER: That creates so much stress. Now you're like, "Oh, I have to deliver. I've taken their money."

CASSANDRA: Yeah, money, the root of all evil [laughs]. So over the course of a year, I created maybe three or four different iterations of this that I wasn't happy with and never shipped. And then I ran out of money and got a job at Google and kept working on this in my free time. Over the course of four years since Adventures in Cognitive Biases first launched, I made nine different iterations of totally different storylines.

SPENCER: Only took nine different storylines. [laughs]

CASSANDRA: Yeah, totally different, kind of from scratch. Each time, we're just like, "Chuck the previous things out of the window." [laughs]

SPENCER: I don't want to be tough on you, but could it be possible that the problem is not that it's not good enough, but that it's something about your own way of assessing your projects?

CASSANDRA: Possibly. [laughs]

SPENCER: I mean, nine storylines.

CASSANDRA: Possibly. There were definitely things I could have done differently to design things more efficiently. And I would love to share those, once I finish telling my shed saga, because I guess that's why I'm kind of excited to be on air today, to share this failure and hopefully save other people from this path.

SPENCER: Well, I have to say, I think it's really wonderful to talk about failure because, first of all, people rarely talk about it. They're often embarrassed, even though everyone fails sometimes, and a lot of times we learn the most from our failures. And if we're trying to do hard things, I think we have to assume that we're gonna have many failures along the way. So I think this kind of sharing is super valuable. Thank you for that.

CASSANDRA: Thanks for creating a safe space to do that. So okay, nine different versions, and now it's been four years, and I'm working at Google, working on this in my free time. And I just realized, "Hey, I don't think I'm ever going to finish this project while I'm employed full-time at Google. I should just quit and finish this project." So I sent off a goodbye email, like "Goodbye. I'm going to go work on my passion project." And Google management stepped in and offered the opportunity to finish the project while working on it full-time for six months at Google.

SPENCER: Pretty awesome offer.

CASSANDRA: Yeah, really grateful. Thank you so much, all the people at Google that made that happen. So I had the opportunity to finish the project with advice from Google's top-notch design team and all other resources there. And then I learned that designers and product managers have lots of processes and frameworks for avoiding shed projects like this. [laughs]

SPENCER: Yeah, I'd love to hear some of what you learned about how to avoid sheds.

CASSANDRA: So many things. I'll just drop some of these ideas, and we can go into them more in depth, but probably the ones that you've already alluded to: user-centered design, the jobs-to-be-done framework, really iterating with a user; you can do cognitive walkthroughs with them. There's pretotyping, which is like pre-prototyping, because even prototyping takes too long. So what can you do to get stuff in front of users more cheaply and faster?

SPENCER: Tell us about what is a cognitive walkthrough and what problem does it solve?

CASSANDRA: Yeah, the cognitive walkthrough is probably my favorite technique. This comes from user experience research and allows you to really learn about users holistically. What you'll do is you'll show them something, and the something could be what you currently have, or it could be just sketches of your idea, or if you don't even have that, it could also be a competitor's product or offering. Basically, the point is, you show them something, and then you ask the user to share their stream-of-consciousness reaction to the thing that you're showing them. Any reaction, be it positive, negative, or surprising, or even if they don't feel like their reaction is relevant at all, you just ask them to say it anyways. And then you take [laughs] very careful notes and see which parts of the product resonate with them, or which parts are confusing for them.

SPENCER: Yeah, it's a super useful tool, and I'll just add a couple things on top of that about how I like to do this. One is, I like to make it super clear to the person that you're interviewing that the way they can most help you is by telling you what they don't like, and by criticizing it. Because by default, if you do this with a friend, they're going to tell you that your thing is awesome because they want to be supportive and make you feel good. Even if you do it with an acquaintance or stranger, there's going to be a temptation to not be too harsh with you, because it's awkward and uncomfortable. But if you flip it, so that they actually feel like the way they help you is by making it better, if you say, "Look, I really need to make this better, and I need your help," then it just totally changes the nature of how they can be helpful to you. That's one thing I find really powerful. Another is people are really tempted when they're doing these interviews, to stop the person or direct them. And instead, I like to say, "Okay, I want you to just use this the way you would use it, as though I was not here. Pretend I'm not here. The only difference is that you're just going to speak out loud everything that runs through your mind. So you're just verbalizing your internal content, but otherwise pretend I'm not here." And then you only actually stop or redirect the user when they get really stuck in a way that is just derailing the interview.

CASSANDRA: Yeah, thank you for that. I guess the third thing I would mention is the surprising fact that you don't actually have to do this with that many people. [laughs] I found through doing this for my statistics project that the marginal benefit of additional cognitive walkthroughs drops off after five or six people. And that's a number that comes from user experience research as well: you only have to do it with five or six people.

SPENCER: I would agree that that's often enough to learn a lot, but just to try to dig into that idea a little bit more, of how many of these you have to do, I think what I like to do is, you do (let's say) three, and then you ask yourself, "Am I learning stuff?" And if not, then you might consider using a different tool, because there's a lot of different ways to learn about your user. And if you are learning stuff, do another three and then say, "Okay, am I still learning stuff?" And so I like to keep going until I feel like I've done three in a row where I've stopped learning anything new. It can be a way to adjust dynamically, because different projects are going to take different numbers of interviews. I think that's a useful thing. Also, I just want to comment on something else, which is that some people express extreme skepticism about this kind of thing, because they say, "Well, okay, so you did five interviews. That's five data points." Imagine someone does a drug trial and five people get the drug; you'd say, "Well, you can't learn anything from that." But I think this is based on a misunderstanding of the way data works, because when you do an interview like this, a lot of times when someone struggles with something, you can actually tell immediately that this is a real problem that you just weren't aware of before. It's more like making you aware of something that is clearly a problem. For example, imagine you're doing your second interview, and someone gets stuck because they click the wrong button and they can't get out. You don't need to have that happen 100 more times to know that that's a problem. You're like, "Oh, I'd never thought about that. That's a way a user can get trapped. Clearly, that's bad." Or say you do five interviews, and two people were confused by the same thing. Now, if you had asked people, "Does that thing confuse you?" and two of the five said yes, that's a lot less persuasive than if, on their own, when they're told to just speak whatever comes to their mind, two of them organically mentioned that they're confused. It's actually dramatically more evidence, because the chance that they'd both organically mention the same thing by coincidence is really low. Whereas if you draw their attention to it and ask them about it, it's much more likely that they'd say they're confused, so it's much weaker evidence.
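
(To put rough numbers on Spencer's point about unprompted mentions, here is a toy likelihood-ratio calculation in Python. It is not from the episode, and every probability in it is an assumption chosen purely for illustration.)

```python
from math import comb

def prob_at_least_k(n, k, p):
    """Probability that at least k of n users report the issue,
    if each user independently reports it with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, k = 5, 2

# Assumed chance a user brings up this particular confusion *unprompted*:
# very low if the UI is actually fine, higher if it genuinely confuses people.
unprompted = (0.02, 0.40)   # (if fine, if genuinely confusing)
# Assumed chance a user says "yes, that confused me" when asked directly:
# inflated in both cases by politeness and suggestion.
prompted = (0.20, 0.60)

for label, (p_fine, p_confusing) in [("unprompted", unprompted), ("prompted", prompted)]:
    likelihood_ratio = prob_at_least_k(n, k, p_confusing) / prob_at_least_k(n, k, p_fine)
    print(label, round(likelihood_ratio, 1))
# roughly 170 vs. 3.5: the same "2 of 5" result is far stronger evidence of a
# real problem when the mentions were spontaneous rather than prompted.
```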

CASSANDRA: Yeah, I totally agree with that. I feel like the dimensionality of the information that's coming at you through these cognitive walkthroughs and qualitative interviews is just so much higher than a survey. I definitely feel like there's space for both user research techniques like surveys, versus more qualitative things. But yeah, I agree, you would need a lot more, a bigger N, for the survey. They're just complementary.

SPENCER: Yeah, they're complementary, and especially if it's a quantitative survey, you need a bigger sample size. But you could also do qualitative surveys. And I like to use the metaphor of a tool belt. There's something like 20 tools in the tool belt of how to get information about improving a product. I think of cognitive walkthroughs as one of the 20 tools, and it's an incredibly useful and powerful tool, but it's not the right tool for everything. And a quantitative survey is another tool and a qualitative survey is yet another tool, and talking to experts is yet another tool, and so on. The way I think about it is, each of these tools is really good at doing some things, but less good at doing other things. And it all depends on the question you're trying to answer. So if you have a tool belt with a hammer and a wrench, don't use the hammer when you need the wrench, and don't use the wrench to try to hit nails into the wall, right?

CASSANDRA: Yep.

SPENCER: Okay, let's go to the pretotyping, I want to hear what that is.

CASSANDRA: Yeah. Pretotyping is a term coined by Alberto Savoia, a former Googler who now lectures at Stanford. It means pre-prototyping, this idea that even building a prototype may be too expensive for the type of data that you're trying to collect. Especially if you give an engineer like me a prototype to build, I kind of just go ham and run away with it. [laughs] There are cheaper ways to iterate on the idea and get feedback from users. Alberto really encourages you to think about how you might be able to iterate more cheaply. An example of pretotyping: when the guy who invented the Palm Pilot was thinking about making a personal handheld assistant, obviously a first Palm Pilot prototype would take a lot of hardware and software engineers to make. So he first made a pretotype, and it was just a block of wood that he would carry in his pocket, and he would go around with this block of wood in his pocket doing his daily life. And whenever he ran into a situation in which he imagined the Palm Pilot would be useful, he would pull out the block of wood from his pocket and interact with it as though the Palm Pilot actually worked, to see how that felt and to better understand the user.

SPENCER: That's wonderful, I love that. And I think a really key thing about pretotyping is about reducing risk. By creating the fake Palm Pilot out of a block of wood, he was able to test a hypothesis dramatically faster and cheaper than if he actually had to build one out of plastic, let alone make one that actually worked.

CASSANDRA: Yep.

SPENCER: It also goes to a useful principle whenever you're doing a project, which is this idea of: what is the riskiest part of it? For any project, there are going to be some parts where you're like, "Yeah, I could definitely do that part. That's no problem. I have experience with that, or I can see how to do that." And there are other parts where you're like, "I don't know. I'm not sure about that part. That might be hard," or "I'm not sure about that part. I'm not sure if users are gonna like that." And so if you can focus on gathering information about whatever the biggest risks are, that's actually a very efficient way to find out quickly if the project's viable.

CASSANDRA: Exactly. Yeah, actually, I've heard that same de-risking idea — navigating a complex map of all the risks and trying to find the least risky path — from startup circles as well.

SPENCER: It reminds me of when I was talking to a startup founder, and he had a really cool product that he was making. I tried it out, and I was like, "Wow, this is great." And I was like, "What's your plan?" He's like, "Well, I have enough funding for a few months, but I just posted this on Hacker News. I'm getting all this attention, and it's going great." And I'm like, "A few months? Oh, my gosh, you've got to figure it out. You only have three months till you run out of money." [laughs] I think when you're working on a timeframe like that, you don't have the time to do anything other than figure out the biggest risks and eliminate them. And sadly, he eventually did run out of money. I think if he had taken a different approach, though, his product was so cool that maybe it could have succeeded.

CASSANDRA: Yeah, the way you told that anecdote is pretty interesting, because it felt like the founder was using the full runway he had allocated to take his one shot. And I think through pretotypes or other techniques, you might be able to take multiple shots with the same amount of time.

SPENCER: Yeah, that's a really good point. And I think another kind of little lesson in that story is that he was focusing on this thing like, "Oh, people on Hacker News are talking about it, lots of people are trying it." That is interesting, and that's something valuable, but that's not the key thing. The key thing is, does it really solve a problem for people or does it really deliver value? Not just, "Can I get people to share it a bunch on Hacker News," or something like that. I felt like he was being a little bit lured in by the wrong metrics that made him overconfident.

CASSANDRA: Yeah, that's another framework. I guess you're alluding to user journeys, as well as the jobs-to-be-done framework: really understanding what is the problem with the current status quo that users are struggling with, and removing that block. Whereas I think my natural instinct is like, "Wow, that's a beautiful solution to a nebulous problem." [laughs]

SPENCER: Yeah, and I think that it's so easy to get lured in by something that's interesting or exciting without realizing that it doesn't really solve a problem for people. Or it doesn't necessarily have to solve a problem for people, but it needs to deliver a lot of value. And that could be by solving a problem or by giving people something they really want or some other strategy, but it really always has to be about delivering value. Otherwise, even if you can get people to use it, usually it's gonna fail.

CASSANDRA: Every designer I talked to at Google about this project, their first questions were always: What is your goal for this project? How would you know you have succeeded? And what is the intended audience?

SPENCER: Such a great clarifying question.

CASSANDRA: Yeah, I think that's taught in design school. In order to give the perfect feedback, you have to know those framing questions.

SPENCER: I had this idea that I call, "Always be asking the question." Basically, the concept is, if you're building a project or product, there is always some big question that you need to know the answer to that you don't know the answer to. But the first question you have to ask is the meta question of what is the question I need to know the answer to that I don't know the answer to. [laughs] That's the first step, to say, "What is the big question I need to know the answer to?" Then step two is, now that you have that question, you need to go try to answer it. And that's where the tool belt comes in; it's like, "Oh, I've got this tool belt of tools to help me answer the big questions." And then as soon as you have started to answer that question — you feel like you're beginning to understand it — it's no longer the biggest question anymore, and then you have to go back to the meta question and say, "Okay, now, what is the biggest question I need to know the answer to that I don't know the answer to?" So there's this loop of asking the meta question, and then once you define the question, you're asking the question itself, and then using your tool belt to answer, and then going back to the meta question.

CASSANDRA: I'm intrigued by that. This feels like what you were saying about de-risking. The sequence of questions changes as you learn more about the situation. Can you say more about what the meta question is, or the hierarchy of questions?

SPENCER: Let's take your example where you're trying to build a game to teach people probabilistic thinking. When you first started the project, what would you say is the biggest question that you should have tried to answer that you didn't? What I'm asking right now is the meta question. The meta question is, what is the biggest question? It's a question about the question. So what would you say the answer to that is?

CASSANDRA: I should have asked, "Who are these people? [laughs] Who are these people who like Adventures in Cognitive Biases? Who are they? Why do they think that this is useful to their life?"

SPENCER: Yeah, that's great. And then you answer the meta question, which is step two, which is, "Okay, who are these people?" That's the most important question you'd answer. And now, you take out your tool belt and try to answer that. So what tool in the tool belt might you use to answer that question?

CASSANDRA: I'm going to bust out my favorite tool [laughs], which is the cognitive walkthrough, I think.

SPENCER: Yeah, great. So you could watch users use your product to try to help figure out what they find most valuable about it. Is that the idea?

CASSANDRA: Yeah. And before I do the cognitive walkthrough, I usually also ask some open-ended questions about their background.

SPENCER: Right, so that would be even a different tool, just an open-ended interview to try to understand them and what they care about and why they might care about this product. Okay, so now let's say you've answered the question, you know who the user is. Let's say the user is techie types that always felt like they wanted to understand statistics and probability better, but never learned it (or something like that, let's say; I'm just making that up). You've figured that out. Now, you go back to the meta question and say, "Okay, now, what's the biggest question I don't know the answer to, that I need to know the answer to?" because you've already answered the last one. And so now you have to answer the meta question again, so that's where the loop comes back. It's this idea of looping the question.

CASSANDRA: Yes, I see it now. [laughs] Yeah, that's very compelling.

[promo]

SPENCER: Let's transition topics a little bit to the idea of design thinking and the jobs-to-be-done framework. Can you tell us what that means?

CASSANDRA: This is a new idea to me, but I am very excited about it. It's yet another tool on the design tool belt. But basically, we've already talked about what the user's actual problem is, and who these actual users are for whom we're trying to solve real problems. And the jobs-to-be-done framework takes it yet another level further. The problems in the jobs-to-be-done framework are not, "Oh, I want to learn statistics." That's not a problem [laughs]. Under this framework, the problem is the user's intrinsic desires. You think about what would give this user a superpower, like what is the core identity change that they're hoping for by learning statistics?

SPENCER: So learning statistics is an intermediate goal. It's not a fundamental goal. Is that the idea?

CASSANDRA: Exactly.

SPENCER: Why do they want to learn statistics? That's the deeper underlying thing.

CASSANDRA: Exactly. And tapping into that pure underlying motivation, similar to your intrinsic values test, which I took earlier this week and really liked, Spencer.

SPENCER: It's funny, it seems to come up on every episode. [laughs]

CASSANDRA: [laughs] Amazing, so yeah, plus one. Everyone, check that out.

SPENCER: We would not have invited you on Clearer Thinking if you hadn't taken it. But one of the things I really like about the idea of thinking about the underlying goal of the user is that it can help you realize that there's actually a better way to serve the underlying goal than your original plan. For example, I know someone who sometimes advises wealthy donors on how to give to charity more effectively. And she would have this problem again and again, where donors would come in with these really specific plans about how to improve the world. For example, "Oh, I know how to improve the world. We need to get all high school students to take debate class." And then she would ask, "Okay, well, that's kind of specific. Why get them to take debate class?" and then the person would have some theory about how doing debate will help people be able to express their opinions, which is going to help them in all these different ways in their life. And so then, what you could do is talk to that person: "Okay, let's put the idea of debate class on hold for a moment. Let's talk about what are all the underlying motivations you have? What are you actually trying to change in people's lives? And then once we figure that out, we'll keep debate in mind as one strategy, but let's brainstorm some other strategies that we can also potentially use to help them achieve that goal in life." And then of course, the reality is debate is probably not the most efficient way [laughs] to get them to achieve that goal in life.

CASSANDRA: Yeah. Wow, Spencer, [laughs] I think that's the perfect natural segue, or you have just invented the theory of change framework.

SPENCER: Alright, let's talk about the theory of change framework. How does it work?

CASSANDRA: The theory of change framework emerged out of the nonprofit space, like you're saying. It's a framework that allows nonprofits to pinpoint the underlying assumptions for their interventions, exactly what you just talked about. Say we have a nonprofit that gets people to learn how to debate; the theory of change framework asks them to unpack that into the unspoken assumptions, the true map of the space: What is the true goal? What are all the levers in the space? What are the secondary levers? And only when you have this true map of how the nonprofit thinks the world operates can you prove that the lever — the intervention that you're proposing — is the best way to solve the problem.

SPENCER: Absolutely, or invalidate it and show that it's actually not the best way or that it doesn't work.

CASSANDRA: Exactly. That is usually what happens when you make one of these maps. [laughs]

SPENCER: Entrepreneurs sometimes ask me to talk to them about their ideas, and I've been trying to think about how do I help an entrepreneur with their idea? And one of the things that I realized is that, let's say I only have half an hour, one of the most helpful things I can do is ask them for their very specific theory of change, like, "Okay, you're gonna help population X, who normally does action Y in situation Z, instead do action W," or something like that. I try to get them to state it as concretely as possible: how a specific thing in a specific place is different than it would have been otherwise, and then how that leads to a change or benefit for them. And what I find when I do this with entrepreneurs is that occasionally they're able to do this exercise immediately, but more often, they actually struggle to do it. And one of two things emerges, both of which I think are useful. The first thing that emerges is that they realize that there are gaps in their plan. In other words, their idea might be promising, but it's not fully fleshed out, and so this exercise can help them flesh it out further and make them realize that there are other aspects of their plan they have to put in place. The second thing that can happen is that they might realize that their plan makes no sense, basically, that it's not really plausible that the thing they're doing will actually lead to a specific change in behavior for a specific person at a specific time, and that they might actually need to use a different strategy. And either way it comes out, I feel like it's just one of the most useful things I could do in a short amount of time.

CASSANDRA: Yes, that sounds hyper-efficient. I would love to actually see your checklist for how you launch those projects. It sounds like you already know all the things I've spent five years experimenting with and learning.

SPENCER: Well, I've also spent a while experimenting and learning so I feel like we've been on similar journeys in that sense. I've made lots of mistakes and learned from them [laughs] so it's been rather useful.

CASSANDRA: Do you have a checklist like, "In this situation, I use this tool. And in this other situation, I use this tool?"

SPENCER: Oh, for the tool belt, you mean?

CASSANDRA: Yeah.

SPENCER: There's a lot to say about when to use different tools in the tool belt. If we go back to this idea of looping the question that we were talking about before — asking the meta question and then asking the question itself and trying to answer it — to me, the choice of tool in the tool belt depends completely on the question you're trying to answer. For example, maybe you're trying to answer the question, "Why is it that my users who started using my product all stopped using it?" Well, there, you might have to interview people that started using it and stopped. Whereas another completely different problem you might have, and a completely different question, is, "How do I actually solve this problem for people?" There, you might do open-ended interviews with people, or you might go talk to experts in the field, or you might go read books on that area. So to me, the tool in the tool belt is mapped to the question, and that's how you should decide. It's hard to give a clear checklist because it really depends on the nature of your question.

CASSANDRA: Yeah, well, maybe a crib sheet, then [laughs]. Maybe we should summarize all the techniques we're calling out in this podcast with links for further reading along with the questions that they answer.

SPENCER: We should make an infographic together of the tool belt and which tool settles which problem, I think that'd be wonderful. [laughs] So now I want to talk about your journey. You worked on the shed for five years, you learned all these interesting lessons about how to do things better. And then what happened?

CASSANDRA: I finally finished it. With the six months allocated at Google, and with all these new design ideas, I finished and shipped The Wizard's Guide to Statistics. You can find that on my website.

SPENCER: Yay!

CASSANDRA: [laughs] Yeah, yay. And also there is a post-mortem of everything that I learned through this experience, and I also have a blog post on how to make cakes, not sheds, and another blog post on all the tools that we've called out so far.

SPENCER: Before we talk about your transition to the next phase of your life, I just want to ask you: you spent so much time working on this project. Why do you care so much about teaching people probability?

CASSANDRA: Honestly, it was one of the coolest things I learned at MIT. I went to MIT for my computer science undergrad, and then I did a master's at the Media Lab. And during that time, there were a few classes I took. One was a probabilistic cognitive science class, which is very much about seeing the world as probability distributions. Your belief is a probability distribution, and then, as new information comes in, your probability distribution changes, and what you believe changes. And it just blew my mind that there's a mathematical formula for this, like Bayes' rule. You can use Bayes' rule to update your beliefs accurately, and if you don't use Bayes' rule to update your beliefs accurately, then wonky things can happen and you don't make the best decisions in life. So play Adventures in Cognitive Biases to find out more. But this is basically how two different professors at MIT taught me to see the world differently. And I just wanted to package that up and have that in the world.
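
(To make the "update your beliefs with Bayes' rule" idea concrete, here is a minimal sketch in Python. The numbers are made up for illustration; this is not code from Adventures in Cognitive Biases.)

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

# Belief: "this coin is biased, landing heads 80% of the time."
# Start at 10% credence, then observe three heads in a row.
posterior = bayes_update(prior=0.10,
                         p_evidence_if_true=0.8 ** 3,    # P(HHH | biased)
                         p_evidence_if_false=0.5 ** 3)   # P(HHH | fair)
print(round(posterior, 3))  # 0.313: the evidence shifts the belief, but not to certainty
```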

SPENCER: That's awesome. I think it illustrates an interesting point that I don't think I've ever thought about, which is this idea that you can imagine this Venn diagram: on one side are the things that your users really care about — what their problems are or what they value — which is what we were talking about earlier of trying to unpack what your user really wants. And then on the flip side, what do you really want to create in the world? What do you value? And actually, your product needs to be the Venn diagram intersection of these two things. And so it speaks to this external process of understanding your users, and also carving out who your user is going to be, because you can choose your user to some degree, and then you need to study that person you choose, and maybe you'll adapt who your user is over time. But then it also speaks to this internal exploration of what are you trying to do? What are you trying to create in the world? And really both of those things have to come together in your product.

CASSANDRA: Yeah, it does. Actually, that's such a thoughtful question, Spencer, because I guess in my answer, you're going to realize that I was lured in by the beauty of the idea [laughs]. I was like, "Wow, this idea is so beautiful. Let's make it." Whereas all these techniques are saying, "No, no, no, don't do that, Cassandra." I think what this is causing me to reflect on is that it's important to start with what you actually want to happen in the world, and what the barriers to reaching there are, and work backwards to design a solution, rather than getting lured away by beautiful ideas.

SPENCER: I will also say though, it seems like something else is at the root of your quest, because you made Adventures in Cognitive Biases, but your new game — which I love, by the way, and I think everyone should check out — is not specifically about Bayes' rule. It's actually about learning statistics more broadly and understanding statistics on a deep level. Would you agree with that?

CASSANDRA: Yes, that is true. Because once I made Adventures in Cognitive Biases, which I think did cover this Bayesian update of beliefs decently, I had spent so long on it, I'm like, "Okay, I'm gonna stop finicking with this." [laughs] I did try to redo it better, but I decided to just leave it in its original form. There are so many other concepts that are super, super important in everyday life, especially in the current climate, like logical thinking — not even probabilistic thinking — at this point, seems like a [laughs] rare skill.

SPENCER: What are some of the things that you teach in Wizard's Guide to Statistics that you think are most useful to people?

CASSANDRA: The concepts that I chose to cover in the Wizard's Guide to Statistics are the basic probabilistic building blocks that are both useful in everyday life and also, if you're interested in a more technical field like machine learning, I think, the prerequisites. We cover independent events and what randomness looks like, conditional probabilities, the chain rule, expected value, p-values, just the probabilistic and statistical concepts that you are exposed to in everyday life, regardless of whether you recognize them or not. And if you do recognize them, it's so cool to be able to use these concepts to model your world.

SPENCER: That's awesome. If I think about why I see those concepts as valuable, I think each of them has a really specific purpose. For example, understanding expected value is really a decision-making tool. For a decision you have to make in the world, you can think about, "Okay, which option has the highest expected value?" which basically just means, on average, which one do I think is going to be best. But also, something that's really powerful about expected value is that you can show that, for small bets, the optimal thing is just to maximize expected value. For example, if you're playing a betting game like poker and you are making relatively small bets — you're not betting your house — then trying to maximize expected value ought to do the best over a long period. That's not necessarily true if you're making really large bets, so I think we would need a different framework to think about that. But that's what I see as one of the most powerful reasons to understand expected value.
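
(A small sketch of "maximize expected value for small bets" in Python; the two bets and their payoffs are made-up examples, not from the episode.)

```python
def expected_value(outcomes):
    """outcomes: (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# Two small bets: a guaranteed $10 versus a 25% shot at $50.
safe_bet  = [(1.0, 10)]
risky_bet = [(0.25, 50), (0.75, 0)]

print(expected_value(safe_bet))   # 10.0
print(expected_value(risky_bet))  # 12.5 -> higher expected value, so preferred when the stakes are small
```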

CASSANDRA: Yeah, and I also feel like it's a form of literacy. I actually did some user interviews [laughs] before building this, this time. I talked to a number of the security guards on Google campus, and I was giving them probabilistic questions. For instance, I went around and asked people, "Hey, if I'm running a raffle, the grand prize for this raffle is $10,000, and I am going to issue 10,000 raffle tickets, how much would you pay me for one raffle ticket?" This is a very basic expected value question, and the answer should be one dollar. But I got all sorts of different answers.

SPENCER: At most one, right? [laughs]

CASSANDRA: So it's a little risk averse. [laughs] But I was blown away that I got all sorts of different answers. The most common answer I got was two dollars, and someone went up to $500.

SPENCER: Well, I wonder if people were also just trying to please you as well.

CASSANDRA: [laughs] That was part of it, because I did ask them a little bit deeper, like, "Oh, why would you pay that amount?" And some of them were so thoughtful. The answers ranged from, "Oh, that's what I can afford," to "I have a chance of winning $10,000, so it seems like that's what I can afford, so that's what I can put in." And some people were actually even worrying about the raffle organizers, making sure that they got a cut, too [laughs], so they wanted to be more than fair. But yeah, I think anyone trained in probability would just answer, "One dollar." Whereas this was a non-trivial question that other people gave a lot of thought to, without formal training in the topic.

SPENCER: Well, it just shows that having a framework for these kinds of things can simplify certain types of decisions. In real life, usually things are not as easy to analyze as, "Oh, you know the probabilities of each outcome exactly." Still, just knowing how you would play in the theoretically optimal betting case can help you think about real-world problems.

CASSANDRA: Exactly.

SPENCER: I also think with Wizard's Guide to Statistics, something you do that's really cool is the way you focus on getting an intuitive feel for randomness and probabilities, like you have to generate all these different outcomes of coins, and then you have to actually say which coin is acting in a funny or surprising way that you wouldn't expect. I liked that a lot because I think in real life, we're often thrown off by randomness. We have this general feeling that randomness feels a certain way, for example, that there aren't a lot of clumps in randomness. But in fact, randomness tends to be a lot clumpier than people realize. If you're flipping a coin, you might think that, usually when you're flipping it, you'll go heads, tails, heads, tails, there'll be a lot of flipping back and forth. In practice, if you flip it a bunch of times, you'll see these long clumps of heads, heads, heads, heads, and that really surprises people. But that's just the way randomness works. I think we're often tricked or confused into thinking that things that are actually random are not random.
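
(A quick way to see the clumpiness Spencer describes is to simulate it. This sketch is my own illustration, not an exercise from The Wizard's Guide to Statistics.)

```python
import random

def longest_run(flips):
    """Length of the longest streak of identical consecutive outcomes."""
    best = current = 1
    for prev, nxt in zip(flips, flips[1:]):
        current = current + 1 if nxt == prev else 1
        best = max(best, current)
    return best

random.seed(0)
runs = [longest_run([random.choice("HT") for _ in range(100)]) for _ in range(10_000)]
print(sum(runs) / len(runs))                   # around 7 on average for 100 fair flips
print(sum(r >= 5 for r in runs) / len(runs))   # a streak of 5+ shows up in roughly 97% of sequences
```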

CASSANDRA: Yeah.

SPENCER: But I think you have a really nice solution to that, which is actually having people play with randomness and try to get an intuitive grasp.

CASSANDRA: Exactly. And the other reason why I think it's so important to build intuition for these topics, rather than just being able to manipulate the symbols with pencil and paper, is that intuition means it's embedded in your gut sense: you can begin to have quick reactions to what the answer should be. And that's what you need in the real world because, in the real world, when you encounter these probabilistic settings, you're not gonna say, "Oh, stop, let me get out a pencil and solve for this." You do need to have a gut sense in order to deploy this.

SPENCER: But the way I think about it is that you first have to understand it on an intellectual level, and so you have to teach the concept, but then you want to push that intellectual understanding down into your intuition, so that you no longer have to constantly think about it and reflect on it to use the idea.

CASSANDRA: Exactly. And one of the hacks for doing this is using your visual processing power, so using images and visualizations. For instance, when these games teach Bayes' rule, you don't think of the numerical formula. These games present it as bar charts and distributions, because then you can use the parallel processing power of your eyes and your visual system to judge the ratio between the bars and compute the odds-ratio form of Bayes' rule instantly, using areas.
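
(For reference, the odds form of Bayes' rule that those bar-chart visuals exploit is: posterior odds equal prior odds times the likelihood ratio. A minimal sketch with made-up numbers, not code from the game.)

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Prior: hypothesis A is 1:3 against. The evidence is 6x more likely under A than under B.
odds = posterior_odds(prior_odds=1 / 3, likelihood_ratio=6.0)
print(odds)               # 2.0, i.e. 2:1 in favor of A
print(odds / (1 + odds))  # ~0.67 as a probability -- the ratio you can "see" by comparing bar areas
```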

SPENCER: That's really neat. And I think it's really important to realize that we have these different systems in our brain that are good at different things. And for many people, the visual system is much more effective at some things than other parts of the brain, and a nice example of this is with memory. If you can find a way to visualize the thing you want to remember, it often just makes your memory much more accurate. Here's a trick that I use sometimes if I've just met someone and I want to remember their name (because I'm not the best at remembering names normally). Let's say I meet someone named Tom, and I want to remember that that's Tom. I'll actually think of another person I know — either a real person I know or a celebrity with the same name, so in this case, maybe Tom Cruise — and then I'll actually imagine them interacting. And ideally, the interaction should be something that relates specifically to Tom Cruise. So maybe I'll imagine him coming down from the ceiling on a rope like in Mission Impossible, and interacting with this other Tom, maybe by biting his face or his ear or something. The idea is to try to make it really...

CASSANDRA: Salient.

SPENCER: ...weird and visceral and salient, like that's so creepy and weird that Tom Cruise would just bite him [laughs] but you're not going to forget that, right? And the fact that he came down from the ceiling like in Mission Impossible means you're going to remember it's Tom Cruise, because that's very linked to Tom Cruise. And so now, there's no way I'm going to forget that guy's name because I'm going to link it to that image of Tom Cruise coming down from the ceiling. Using your visual memory is just so powerful. Whereas if you try to just remember Tom, Tom, Tom like a loop of audio, it's actually much harder.

CASSANDRA: Fun fact, my middle name is Tom. [laughs]

SPENCER: Oh, wow! Well, now I'll remember that. [laughs]

CASSANDRA: Tom Cruise biting me. [laughs]

SPENCER: Okay, so I want to talk about your standard for yourself. From my perspective, I feel like there's a gap. You've talked about all these wonderful tools for building products more effectively, and making sure you're adding value and so on. But I don't think we've really gotten to why was your standard for yourself such that you had to build this thing nine times, and you were not satisfied?

CASSANDRA: Well, honestly, I think a lot of things went wrong in the nine times that I built them, because I didn't have all these design tools at hand.

SPENCER: You were applying your own internal standard. What would it be like if you build version number six, and then what happens? You look at it, you try it out, and you're like, "Ah, this is not what I want," [laughs] and then you go and build version number seven. Is that what's happening?

CASSANDRA: Oh, yeah, more anguish in between. Usually, it starts off with me coming up with an idea. I'm like, "Wow, wouldn't that be nice?" So again, folks listening, don't do that. Back-solve from the actual problem, and don't get lost in the beauty of the idea. But yeah, so I get entranced by the beauty of the idea, and I'll be like, "Oh, wow, I'm gonna go build that out and see what it looks like." And I build it out, and I'm like, "Ah, that's not as nice as I envisioned." And then I spend some time trying to make it better. Then I'm like, "Oh, it sucks." And then I throw it away and I anguish for a bit, and months later, I come up with another idea and repeat the process.

SPENCER: The reason I'm getting into this is because I think this is something that a lot of people can relate to. I think this is a really common thing, where people will have this idea for something, they try to make it and they're unsatisfied with what they make and then they trash it, and it can spin a lot of cycles that way. I don't think it's fully explained by the design tools we were talking about. Yes, those are really powerful techniques. And yes, if you'd use them, it may well have gone better. But I feel like there's something else going on that you're skirting around, which is something about your self-evaluation. You're imagining this thing in your mind, then you're going and trying to build it, and you're like, "Oh, that's garbage." What's going on? Why does that happen again and again to you?

CASSANDRA: I'm probably not a good judge of my own work. Like I said, I didn't feel like Adventures in Cognitive Biases was very good but then external reception was positive.

SPENCER: Doesn't that suggest though, that even if you're using the design tools and users were saying this is good, that you still wouldn't have believed it?

CASSANDRA: The design tools put the thing in front of actual users. So I think it does help because, if users are excited about it and the creator runs into enough excited users, then they'll probably have the confidence to ship it. But I guess the other thing that I have found helpful in going through these nine different iterations is separating the developer role and the product manager role. So I will usually ask another friend to be the product manager, the one who tells me whether it's good enough to ship or not.

SPENCER: Oh, that's nice, that's a good trick. Good, okay, so I'm gonna throw out a hypothesis. This can be totally off. My hypothesis is that you have this vision in your mind that's unrealistically good. And it's maybe good on dimensions that are almost impossible to satisfy. And then you go try to build the thing in the world, and the world is complicated, and it's actually hard to make stuff, and code gets complicated and messy over time. And then you compare this thing you built to this perfect vision in your mind, and you're like, "Ah, it's trash," because it doesn't live up to it, but it never will. What do you think about that?

CASSANDRA: That's very possible, because it feels like another axis to trade off on is the size of the thing — size versus polish — because if something is smaller, you can polish it more, and it will be shinier. And I don't have as many bones to pick with it. [laughs]

SPENCER: Another reason to make a cake instead of a shed?

CASSANDRA: Exactly, exactly. [laughs]

SPENCER: I would also say though, if you focus on the thing you're really trying to achieve in the world with what you're building, it maybe can help avoid that kind of judgment. Because sure, the thing may not look exactly like you wanted, it may not live up to the ideal in your mind. But if you're like, "Well, but the point of this is to achieve a specific goal, and I can tell I'm achieving it. Yeah, it's not shiny and beautiful, but it does the thing that it's supposed to do." Whereas if you don't have a really crystal clear idea of what you're actually trying to do with it, then you're just comparing it against this idealized form.

CASSANDRA: Oh, Spencer, you're so wise. [laughs] That really echoes a quote from one of my favorite creators, LambCat, who makes a weekly comic, and it's a ton of material to produce. It's a very long comic strip called the Cursed Princess Club. [laughs] But there's both a comic and there's also sometimes music and sound. It's just a huge production for one week. And in their Patreon, they said that their heuristic for when it's good enough to ship is when it's the bare minimum to get the sense through, to get the idea through, so echoing what you said.

SPENCER: Yeah, and it's interesting to think about the different forms of minimal viable product, or MVPs. One form of it is the minimum thing you can show to someone to get feedback or to do a cognitive walkthrough, and that might be a piece of paper that you just drew some stuff on or whatever. Then another version of an MVP is the minimal thing that does what it's supposed to do to the minimum degree, like it maybe has almost no features. A third version is much more complex, and that is the minimum version that solves the problem in the best way that it's ever been solved for that specific audience, which is a much more advanced form of MVP, but it's still minimal in a certain sense. So imagine you're trying to make a to-do list app that is specifically (I don't know, I'm gonna make something up) for chemists. It really hasn't proven its worth until it is the best to-do list app for chemists. It doesn't have to be the best to-do list app for everyone in the world. It doesn't have to have every feature you want it to have. But let's say it's the third best to-do list app for chemists, why on earth would a chemist use it? First of all, they could use the first or second best. Also, they've never heard of your company before and maybe your product will cease to exist in six months. So it's this other kind of minimum viable product; it should be the best to solve the specific problem the user has. Creating that kind of minimum viable product also suggests that you really have to focus on exactly what you're trying to solve and make it as minimal as possible because that's such a high bar that you really have to focus.

CASSANDRA: It is, yeah, and just to echo that, a friend who's a startup advisor as well as a serial entrepreneur says that, when you set out to be the best in class, it's not okay to just be 20% better. You have to be 10x better. [laughs] You need to have an idea that you think will be 10x better than the existing players in the field.

SPENCER: I think there's a lot of truth to that, but I'll just add something to that, which is that you certainly don't have to be 10x better in every way. And in fact, you can often be worse in a number of ways. But you probably have to be something like 10x better in some particular way that users really care about. An interesting example of this would be going from regular photographic cameras to digital cameras. In many ways, digital cameras sucked. The early ones had way lower resolution, but they had some really great advantages over a photographic camera. It's totally unrealistic to start a startup and then just be 10x better at everything relative to your competitors. You have to hone in on something that the user truly cares about, and that's the thing that you have to absolutely crush it at. It doesn't have to be all types of users; you just have to be 10x better at one thing that some types of users really care about.

CASSANDRA: Yes, thank you for that clarifying [laughs] nuance. I like that a lot.

SPENCER: I'll also add why 10x better, because that might seem mysterious. And obviously, this is just a rule of thumb; maybe 5x or 3x is fine. But I think the reason that rule of thumb exists is that there's a huge switching cost. Someone is probably already doing something, or they have to learn how to use your system, or they have to give you money or something like this. And so if you're only 20% better, it can be really hard to convince people to bear that switching cost, especially if you're unproven and they've never heard of you. Whereas if you're 10x better, or let's say even 5x better, that's a pretty compelling reason to try your thing and invest that extra time. Alright, with that, I want to jump into your latest transition. Okay, you finished your shed, you shipped it, then what happened?

CASSANDRA: Yeah, so I shipped it, and then I stuck around on this lovely prototyping team inside Google research for about another year. Then the climate bug bit me. I've always been climate-concerned, and I just funneled money through donations to the Rainforest Trust and environmental organizations on the side. But I wrote a blog post about how even Wall Street is pitching in for the climate crisis, and I reconnected with a friend who also turns out to be climate-concerned. Once the two of us got to talking, we decided to both quit Google and work on the climate crisis, because basically, we as a society only have ten, 20 years to reach net zero carbon emissions and stay there forever. So it's all hands on deck. I honestly didn't realize the urgency of the problem until I did the back-of-the-envelope calculations and read the IPCC reports.

SPENCER: Just so the listener has an idea of what you're referring to, what do you believe is going to happen in ten to 20 years if, let's say, we continue polluting like we are now?

CASSANDRA: If we don't reach net zero carbon emissions in ten to 20 years, then we're going to blow past the 1.5-degree and two-degree Celsius thresholds that we've given ourselves, and the world is going to warm a lot more than that. So basically, if we do nothing, it looks like it's going to be four or five, maybe seven or eight, degrees Celsius of warming, at which point we start...

SPENCER: Over what period?

CASSANDRA: Over the next 100 years [laughs]. It's pretty uncertain, because once you reach these dangerously high levels, it kicks off positive feedback loops. So scientists and the models that we have don't really know how much it's going to warm, because once we start warming the world to a certain extent, it kicks off these loops. For instance, the ice melts, and ice, because it's white, was reflecting sunlight back out. If it melts into darker ocean instead, that darker water absorbs more heat and warms the world further, and we're going to have more natural disasters and fires, which also release more carbon into the air, lots of positive feedback loops that are hard to model. No one really knows, but it's dangerous and unknown and potentially catastrophic.

SPENCER: Let's talk about applying probabilistic thinking to this topic, because I think that some of the ideas we were talking about earlier are really relevant. When I think about climate change, I see two things happening simultaneously. I'm curious to see if you agree with me. One is, it seems like people are very focused on the mean prediction. But actually, the really scary thing is if it turns out it's in the 80th, 90th, even 95th percentile of our estimates. In other words, we expect it to have very bad nonlinear effects where, going up one degree, okay, that causes problems; going up five degrees, that might be totally catastrophic. And so if you think about our models and our mean estimates, if it's right at the mean, that's not nearly as scary as if it's like, "Oh, actually, we underestimated. It's actually twice as bad," and maybe that's not that likely, but it's so much worse that, to me, a lot of my fear around the topic resides in those upper-end estimates. What do you think about that?

CASSANDRA: I think that's very statistically literate of you [laughs]. Yeah, I think we toss around the mean because it's easy to convey; it's a single number. But really, yeah, if it's a distribution, the upper tail of the distribution is really scary. I think, unfortunately, we've been exceeding the warming estimates.

SPENCER: Yeah. So the second point I want to make about this is that, I think with many things in life, we have the intuition that the more uncertainty there is, the less we should worry about it. And I think that's often right. But I think that in things like this that could be really, truly catastrophic, greater uncertainty can actually be a reason for greater concern. And imagine the variability of outcomes increasing — like we're less and less certain about how bad climate change will be — because of this nonlinear effect, that actually puts more risk in the tails, which is where the really, really bad stuff happens. So if we knew it was gonna be one degree, that's actually less dangerous than if we're unsure. It could be anywhere from zero to two degrees, which is less dangerous than negative one to three degrees, because you're pushing more and more risk into the tail. So I think that people's intuition is like, "Oh, yeah, the more uncertain we are, that means we should ignore it." It's actually the reverse. What do you think about that?
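
To make that concrete, here's a minimal sketch in Python. Every number in it is invented for illustration (the mean, the spreads, and the five-degree threshold are not real climate projections); it just shows how holding the mean fixed while widening the uncertainty pushes more probability mass past a catastrophic threshold.

```python
# Minimal sketch with made-up numbers: same mean warming estimate,
# increasing uncertainty (sigma), and the probability of exceeding a
# hypothetical "catastrophic" threshold grows with that uncertainty.
from statistics import NormalDist

MEAN_WARMING = 3.0  # hypothetical central estimate, in degrees C
THRESHOLD = 5.0     # hypothetical threshold for catastrophic outcomes

for sigma in (0.5, 1.0, 1.5, 2.0):
    tail_prob = 1 - NormalDist(MEAN_WARMING, sigma).cdf(THRESHOLD)
    print(f"sigma={sigma:.1f}  P(warming > {THRESHOLD}C) = {tail_prob:.3f}")
```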

CASSANDRA: Yes, totally. It's a cognitive bias actually, that when things are uncertain then people are less likely to act. As someone who does know things about cognitive biases, that makes me want to act more, because I'm like, "Oh, gosh, because it's uncertain, people are unlikely to act. Because it's a collective action problem, people are unlikely to act. So gosh, we really need to act." And also, historically, people have not been acting. Yes, all science points to, "Hey, everyone, act!"

SPENCER: Yeah. Now, another point that I'm curious to get your feedback on is, for me, if you're thinking about something that's truly catastrophic, it flips the role of evidence, because normally, if something's really extreme, we say, "Well, in order to prove something really extreme, you'd need really extreme evidence." But I think when we're talking about something that could be absolutely catastrophic for society, or the whole world, even a little bit of evidence that the thing might happen seems like it's enough reason to take it very seriously. My feeling is, even if I thought there was only a 20% chance that climate change really is as big a deal as most scientists claim, that would actually be enough to still allocate lots of resources to the problem.

CASSANDRA: Yeah, because you're taking the expected value alongside your utility function. If there's a small probability — which it's not small, honestly [laughs], it's very, very likely — that we're gonna get a very bad result, then we need to multiply the probability by the size of the harm in order to get an indicator of how much we should care.
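
As a toy version of that multiplication, here's a short Python sketch. The 20% probability echoes Spencer's hypothetical; the damage and mitigation figures are entirely made up, only there to show how a low-probability, high-magnitude outcome can still dominate the decision.

```python
# Toy expected-value calculation; every number here is hypothetical.
p_catastrophe = 0.20             # "only" a 20% chance of the bad outcome
damage_if_catastrophe = 100e12   # assumed damages in dollars if it happens
mitigation_cost = 2e12           # assumed cost of acting seriously now

expected_damage = p_catastrophe * damage_if_catastrophe
print(f"expected damage if we ignore it: ${expected_damage:,.0f}")
print(f"worth spending ${mitigation_cost:,.0f} to mitigate? "
      f"{expected_damage > mitigation_cost}")
```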

SPENCER: I think with climate change and other issues, we're really talking about something potentially catastrophic, like bioengineered terrorism, where maybe someone will try to make a super virus, or the future of advanced artificial intelligence, where maybe someone will make an AI that actually does a bunch of damage to society. Because these things could be so negatively transformative, the burden of evidence to me is a lot lower in order to take them seriously. Even if these things had a 20% chance, we should take them dead seriously and we should allocate really significant resources to try to understand them better.

CASSANDRA: Yes. Do you have an idea on how we might be able to deploy these resources given that this requires both psychological and statistical literacy to get people to appreciate the problem?

SPENCER: Well, when it comes to climate change, the thing that I often come back to is that it seems like getting individuals to change their personal behavior when it's against their self-interest has been tried hundreds of times and just not been that successful, for example, getting people to personally make decisions where they pollute less. And often, it also seems like these decisions get corrupted; people tend to focus on things like using paper straws, which are very visible ways of signaling that you care about the environment but that, in the grand scheme of things, are really not the core issue at all. And so I think I'm somewhat skeptical of these individual behavior change-based strategies. Not that people shouldn't make that choice for themselves and do what they can; I think they should. But I just don't see that as the long-term strategy that's going to actually work.

CASSANDRA: I totally agree. And I don't think it's fair to ask people who have real problems in their life to constantly be making these trade-offs between what's convenient and what's good for the environment. I do feel like the change needs to be at the governmental regulation, corporate, and more scalable level because, as we've seen, even with the coronavirus lockdowns, our emissions have not dropped that much, even though we as a society are consuming much less than before.

SPENCER: One of the things I noticed about the way that climate change is often presented is that it feels to me like scientists and journalists and so on, who think climate change is a big deal, are a little bit nervous about presenting the true amount of uncertainty. In other words, I sometimes feel like they overstate the level of certainty. I totally get why they do this because they're like, "Well, this is a really big, important thing that could actually cause hugely catastrophic consequences. And so many people are science deniers or they don't want to take it seriously. If we present it as uncertain, that gives more ammunition to the other side." But to me, this is a pet peeve, because my general attitude is it's actually a better idea to present it with the uncertainty we have around it, but point out how, in fact, that uncertainty shouldn't make you feel any better. My general sense is that there's a lot of evidence that the planet is warming due to human behavior, but that the actual models have more uncertainty than is generally acknowledged in the press. Do you think that that's true or do you think I'm wrong about that?

CASSANDRA: Historically, I would say that is incorrect. For anyone who's curious about this, there's a great piece from the New York Times Magazine called "Losing Earth." Back in the '80s, there was actually a lot of political consensus about climate change, bipartisan support — "We should do something about this problem." I was surprised to learn that the elder George Bush got into the White House with the campaign line, "We will combat the greenhouse effect with the White House effect" — so yes, there was enormous bipartisan support for doing something about climate in the '80s. And it was actually the scientists who let us down, because they were not willing to take a more declarative stance. Both of us have scientific training, and we are so used to, "Nothing's certain for sure." I mean, that's true, nothing is certain for sure. But the scientists just could not agree on the wording for how to frame how likely this was to be a problem. And because of that, the movement got sideswiped politically.

SPENCER: But do you disagree with what I'm saying about today? My feeling is that actually, very often when you read about climate change, it's viewed as though, "Oh, these models are super accurate. And we know exactly what's going to happen in the next 20 years." Whereas my sense is that there's actually a lot of uncertainty. That uncertainty makes me feel no better about it. Like I said before, it actually makes me feel worse about it, makes me think it's riskier, but I'm curious to hear whether I'm off-base. Maybe they're not overstating the knowledge we have.

CASSANDRA: They're models [laughs] and I think a lot of them are written in Fortran. They're kind of sketchy. [laughs] A lot of the models are also based off of other models in the same portfolio. There's not a single model. There's a portfolio of models. Different labs have different models, and the models make their own different predictions. And so the IPCC looks at the portfolio of models together to make its decisions and estimate the lower and upper bounds.

SPENCER: Well, my understanding is that there's a few different sources of uncertainty that you could combine. One type of uncertainty is, we know that there are some things that are not in the models. For example, my understanding is that cloud cover is very hard to model and has some effects, and I think they've made some progress on it more recently, in the last few years. But that's an example of something that, previously, was not in the models, or not even very accurate, and we knew that. That's a known problem and that's gonna be getting better. Of course, we're trying to add more and more factors into these, essentially physics simulations. So that's one type of uncertainty. A second type of uncertainty is that the models disagree with each other to some degree; they don't all exactly agree. And so then there's this uncertainty around which model to trust and that actually creates another layer of uncertainty. A third type of uncertainty comes in because we can't predict the future. If we think about what's going to happen in 20 years, it actually depends on economic output, it depends on behavior change, and it also depends on technological progress, it depends on regulation. There's all these exogenous factors that have nothing to do with climate per se, but they actually do influence the outcome of the model. And then finally, fourth, I point to unknown unknowns. It could turn out our models have problems that we don't even know that they have. For example, as you mentioned, many of them are based on really old Fortran code that's running some kind of physics simulation. And who knows, there could be just assumptions baked into that, that we don't realize are faulty assumptions, or missing important dynamics. My suspicion is that, when you add up all those factors, there is more uncertainty than is generally stated about the models. That being said, I still think it's something we should take very seriously as a problem.

CASSANDRA: To your second point, there is an uncertainty interval provided with these models, because there are many models. Like you said, the models are outputting different predictions. So we have a range of predicted values generated by this portfolio of models. And I guess something else that may help is that these models have been making predictions for decades. So we have been reconciling current atmospheric measurements with the model predictions and seeing where our actual path falls relative to the predicted band. We know that these models have been erring on the optimistic side.

SPENCER: But that's another interesting point: the fact that they're underestimating suggests that there is something missing, which shouldn't make us feel better [laughs]; it should make us feel worse. But it does suggest that there's some kind of problem. The thing about modeling like this, though, is that it's so hard to account for all these forms of uncertainty and add them up properly. You can say, "Well, we have five models, and they all disagree, and we can use them as a distribution." That's one type of uncertainty. And you could say, "Well, each model, we know that it itself has an uncertainty band," because it's doing some kind of physical simulation. And then we also know that there are some things that we know we're not modeling, and we could try to add that. And finally, there's the unknown unknowns, which are the hardest to add. We don't know how to add that. So yeah, I mean, obviously, it's a subjective question: what does it mean to be really accurate? How accurate does it have to be before we can say it's really accurate? But yeah, I don't know if you and I actually disagree, or if we just have different subjective senses.
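
Here's a small sketch of just two of those layers, model disagreement plus each model's own spread, using an invented five-model ensemble (none of these numbers come from the IPCC or any real model). Pooling draws from all the models gives a wider band than any single model would report on its own.

```python
# Pool samples from a hypothetical ensemble of models, each with its own
# central estimate and internal uncertainty, and look at the combined band.
import random
import statistics

random.seed(0)

# (mean warming, internal spread) per hypothetical model, in degrees C
models = [(2.8, 0.4), (3.2, 0.5), (3.6, 0.6), (2.5, 0.3), (4.0, 0.7)]

pooled = [random.gauss(mu, sigma) for mu, sigma in models for _ in range(10_000)]

cuts = statistics.quantiles(pooled, n=20)   # 5%, 10%, ..., 95% cut points
lo, hi = cuts[0], cuts[-1]
print(f"pooled 5th-95th percentile band: {lo:.1f}C to {hi:.1f}C")
```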

CASSANDRA: I totally agree with you that the models aren't perfect, that the models are inaccurate. But my main point is that it doesn't feel as though we have the time to wait for a super accurate model before acting. And I feel confident acting because these models have been in production for decades, so we have gotten a number of data points. And if you plot what actually happens in the world, it makes the models seem over-optimistic. So what's actually going to happen, if this trend continues, is probably worse than what the models are predicting. So we need to get on it.

SPENCER: Yeah, like I said, even if I only thought there's a 20% chance of this being a real issue, I think we should take it much more seriously than we do right now in the US. And I think it's easily more than 20%. So I'm totally sold that this is something we should be taking really seriously. Tell me about your journey and your approach to working on this.

CASSANDRA: Maybe I can just share a quick story before that on the uncertainty. I went to a science and tech high school in the US and it used to be ranked as the number one high school in the United States. And our summer assigned reading was a science fiction novel called "State of Fear" by Michael Crichton, and it was about global warming. It's science fiction [laughs] but it's written very realistically, and he makes the argument that we don't need to worry about climate because all these measurements have uncertainty and error to them, so why are we getting all worked up over nothing? And that's basically the whole book; it's a science fiction novel to create climate deniers. Being an easily impressionable rising high school senior at the time, I was reading this book and was nodding all along, like it was pretty persuasive. It was just like, "Yes, of course, there's uncertainty in all the measurements. Yes, of course, who knows what's happening? Is this actually caused by humans or not?" Obviously, I've educated myself since then. But I guess I'm very upset about this book because it taps into lots of human cognitive biases towards inaction. This uncertainty makes us feel like we're not responsible for acting. It makes us question our scientists in a way that just leads to inaction and total destruction. It's like what Trump does. Help me formulate this [laughs]. They both question. Trump caused people to distrust the media. So if you don't trust the media, then you're kind of screwed at that point. It's the same on the climate front. If you don't trust the scientists, if you start questioning the process, I don't know.

SPENCER: I think what you're getting at is that, if we distrust scientists, then how do we make progress on topics like this? Because besides scientists, who else is actually studying climate change deeply, and trying to figure out what the danger is and so on? Once you throw scientists out, we have no way to make progress on this question.

CASSANDRA: Right. And the model uncertainty is a true thing that exists but the scientists know that. Anyone who is scientifically trained knows that there's always going to be model uncertainty. 'Uncertainty' is like a technical term; it's not a reason for distrust. It feels like that word out in the public — 'uncertainty' — means something different.

SPENCER: Right, 'uncertainty' in the world means, "Oh, maybe we should wait, maybe we shouldn't act." But uncertainty to a scientist is a good thing; it means you have a model of the likelihood of something and the range of possible outcomes. So uncertainty is not inherently bad; it's fundamental to doing science. Yet it can be used to discredit something. I think, to me, a good metaphor would be, imagine you thought there was a 50% chance that you might have some horrible, deadly cancer. You're not going to be like, "Well, it's uncertain. I'm not going to do anything about it." No, you're going to take it really seriously. The fact that it's only a (quote unquote) "50%" chance, that shouldn't make you not deal with it. And similarly, uncertainty in climate change shouldn't make us not deal with it. If it's going to happen, it could be like cancer but for the whole world. So uncertainty itself is not a reason for lack of action, especially when it comes to something so serious. If it was something really minor like, "Oh, you have a 50% chance of this really mild condition," okay, well, maybe the fact that it's 50% uncertain makes you even slightly less likely to deal with it than you would otherwise, which is probably not very likely, because it doesn't really matter either way.

CASSANDRA: Right. And what I'm kind of getting at is that it seems like there's some unfortunate technical punning that's happening in this space, like 'uncertainty' is the technical term, but then in common English, it means something else, like expected value is a technical term [laughs] but I think it also means something different out in the world.

SPENCER: This is similar to the usage of the word 'theory.' A scientist might say, "the theory of evolution." But to a layperson, the word 'theory' can mean something that's uncertain. And so there can be a confusion like, "Oh, if they called it a theory, that means that they're uncertain about it." Whereas the evolutionary biologist will be like, "Well, no, I'm really not that uncertain about evolution. I'm almost certain that it happened." But you know, the word 'theory' can have different meanings.

[promo]

SPENCER: Let's wrap up by talking about your new project and the direction you're heading with it.

CASSANDRA: Yeah, the new project. My buddy, Eugene, and I both left Google to work on climate, and we're also applying all the design principles that we talked about in the first part of the podcast to our new venture. When Eugene and I sent off our goodbye emails like, "Bye! We're leaving Google to work on climate," it was super well-received. Eugene's post even went viral on LinkedIn and got half a million views and hundreds of people reached out to us saying, "Hey, we're thinking about making a similar transition, but we don't know where to start."

SPENCER: Why do you think it resonated so much with people?

CASSANDRA: Oh, gosh. I think this is on a lot of people's minds, but I think different people are stopped from doing something for different reasons. Maybe some people don't think that there's hope, or they're not sure what one person can actually do in this space, or perhaps they're not sure how their skills fit into the space. And yeah, so I guess there's a lot of ambiguity. And EJ and I have worked through these questions ourselves, and we're trying to create a community. It's called Work on Climate. We're at workonclimate.org. We want to pull everyone who's climate-concerned into a meaningful job or long-term project in climate, or if you want to start a climate startup, we'd love to help you meet your co-founder, because scaling people is probably the most impactful thing that we can do in the short term. We just need way more people working on this problem.

SPENCER: That's really cool. Before talking to you about this, my view on strategies to try to make a massive difference in climate change was that there's only two really viable strategies that can have a massive difference. There's many, many things that could have a small difference. But in terms of having a massive difference, where it really solves the problem, the only two strategies that I could see were: one, political change, like if the US had a president that really cared about climate change, and then that President worked with the Chinese government and the Indian government, and they all collaborated, that can make a massive difference. The reason I mentioned those three countries is because my understanding is that a very substantial proportion of the greenhouse gasses are coming from those three countries, and so you really have to get them on board. And then the second seemingly viable strategy for making a huge difference would be a technological one. If people could develop better technologies that actually flip the equation so it's actually in people's self-interest to not pollute or to pollute dramatically less, then that also would leverage self-interest to solve the problem. And my understanding is that Germany, for example, funded a lot of green technology, which has helped to some degree. But you could imagine, for example, prizes if the government puts massive amounts of money up if someone can invent a technology that has certain specifications that helps with climate change, and things like that. And that kind of prize, because it's only paid out if someone actually achieves the goal, could actually be a relatively cost-effective way to try to incentivize that. Those were the only two strategies that really seemed viable to me as huge changes. But then talking to you, you actually made me realize that there might be a third strategy involving corporations. Can you tell me about that? I thought that was fascinating. First of all, I want you to tell me about this third strategy that you taught me about involving corporations, which I think is super interesting. But also, I want you to critique what I just said. Do you agree? Do you disagree? What do you think about what I said about these two large strategies?

CASSANDRA: I think the two large strategies — the political and the technological strategies — are correct. But I think if you just leave it at that, and if you don't unpack it further, then it feels like a very hopeless problem.

SPENCER: But there might be a lot of actual sub-steps that different groups can work on. They don't have to solve the whole problem unilaterally independently.

CASSANDRA: Exactly. I would love to unpack it a little bit. For instance, the first umbrella of political strategies — and I think you said, like, "Oh, if only we had a president who was mission-aligned" — but I guess there are some really promising things that are happening on the political front which give me hope. For instance, I learned about this great organization, Run For Something, which gets young progressives to run for office. And the interesting lever here is that apparently, 40% of state legislature seats go uncontested. So there's one person from one party running for the seat, and they don't have an opponent.

SPENCER: Really? I didn't know that.

CASSANDRA: Yeah, so that's a very interesting lever because if we pull this lever, we can feed the state legislatures with more young progressives. From the state legislature, they go on to Congress, and that's where we need them. Run For Something is a super efficient organization. Some of the statistics I found impressive were that, I think, on average, $10,000 to Run For Something gets a new person elected.

SPENCER: What? That little? Oh, my gosh, that's shocking.

CASSANDRA: Yeah, it's very efficient compared to the 'get out to vote' campaigns. The numbers I've heard on that side are, it takes about $300 to flip a vote.

SPENCER: Are there other political strategies that seem really interesting to you?

CASSANDRA: Yeah. On the non-elected front, there are public utilities commissioners. There are a few hundred across the country, and they are responsible for allocating trillions of dollars into new power plants. One activist I found super inspiring is Hal Harvey — I think he's an engineer by training and now a climate lobbyist — and he goes to these public utilities commission hearings when they're deciding what type of power plant to build, and he'll go out and recruit a hundred mothers of asthmatic children to show up to these hearings with him and kind of pressure the commission into building new green power plants. Reading about Harvey's strategies — not just this one, he has many more — gave me hope that one person can really make a difference in the climate space if they're very strategic.

SPENCER: Tell me now about some strategies in the technological space.

CASSANDRA: Yeah, you mentioned Germany. Germany was able to drive down the cost of solar panels for everyone across the world by putting massive amounts of funding into mass producing them. There was no new R&D breakthrough or anything. Just the act of mass producing solar panels with the existing technology drove down the cost of solar panels by a few orders of magnitude. That is actually the same strategy used by the French nuclear program. It's not that the French have much lower nuclear costs than we do here in the United States because of some fancy new R&D. It's because they picked a reactor design and mass produced it. Whereas here in the United States, we try one reactor design, and then we try another reactor design, and we never reach those economies of scale.

SPENCER: That's interesting, because people might think, to make progress on tech, you have to really innovate and develop something new but maybe you can just do the classic thing of just building more of them and then the costs tend to go down automatically.
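
One common way to formalize "build more and the cost falls" is a learning curve, sometimes called Wright's law: unit cost drops by a fixed fraction every time cumulative production doubles. The sketch below uses an invented starting cost and a 20% learning rate, neither of which comes from the episode; it just shows how that compounding can add up to orders of magnitude.

```python
# Learning-curve sketch with illustrative numbers (Wright's law):
# each doubling of cumulative production cuts unit cost by `learning_rate`.
import math

def unit_cost(cumulative_units, initial_cost=100.0, learning_rate=0.20):
    doublings = math.log2(cumulative_units)
    return initial_cost * (1 - learning_rate) ** doublings

for units in (1, 10, 100, 1_000, 10_000, 100_000):
    print(f"{units:>7} units produced -> cost per unit ~ {unit_cost(units):6.1f}")
```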

CASSANDRA: Yeah, actually, the people in the climate space, the researchers that we've been talking to, say we already have a lot of climate tech. Yes, we need more climate tech, but we haven't even deployed the existing climate tech at the scale that we could.

SPENCER: Why don't you tell us a little bit about why there was such a problem with getting investments to work in climate change, and just generally in the clean energy space.

CASSANDRA: Right. Most investors in the climate space are trying to maximize their return on investment. I feel like we can apply these same principles but also through a philanthropic lens. For instance, effective altruists try to maximize their impact per dollar donated, and I feel like in the climate space, we do need a hybrid approach, because a lot of these startups, and a lot of climate R&D, are not necessarily traditionally profitable, especially in the early stage. So climate investors don't want to invest, but they're also too foreign for the philanthropic space to fund, so we need some sort of hybrid approach. I've been pretty surprised by my encounters with investors in the space. Not all investors are like this, but the vast majority of climate investors are in it just for the money; they treat it just as any other investment. I feel like there's a great opportunity for some sort of hybrid investing-philanthropy approach. And I think that's a really interesting area to poke at because, currently in the climate space, if you're a founder, investors will only be interested in working with you when you look profitable. I guess that makes sense [laughs] from the investing standpoint, but we only have one planet. Are there ways that we can fuse investment and philanthropy in order to still make efficient, effective decisions, but ones that aren't purely about a single axis?

SPENCER: What do you think of the prize model that I mentioned previously, where prizes could be offered for really significant breakthroughs?

CASSANDRA: Yeah, I definitely feel like prizing — and there are other levers we can pull, other intrinsic motivators for people, like prestige or things — whatever gets the job done. You've told me about this in another context, but I found your prizing and your exponential grant idea very compelling, if we could share that with the audience.

SPENCER: Yeah. Basically, the idea is, for clearerthinking.org, we wanted to see if we could help other people build programs like we do. In other words, we build lots of programs for decision making, habit formation, cognitive biases, all kinds of things like that, that we launch online. But we wanted to say, "Well, can we leverage other people to build these kinds of programs that then can be released to the public and hopefully benefit people's lives?" And so we created a particular system that we call our micro-grants program, where there were three stages. The first stage is, you come up with an idea for a program you want to build, and you submit it. It's a competition and you win a little tiny bit of money. And if you're one of the winners, then we invite you to stage two. In stage two, you have to write a detailed outline of what program you might want to create, and why you think it's worth creating, and if that's approved, then you win a bit more money. Finally, you're invited to stage three, where you actually have to go try to build that program. We actually created a technology called GuidedTrack to help these people make interactive learning experiences. All the programs on clearerthinking.org, our website, are made using our software, GuidedTrack. The applicants have to go learn GuidedTrack and try to build their own program. And then we would give you feedback from two of our team members on your program to try to help you make it better. And additionally, we would use another system we made called Positly — which recruits people for studies — to recruit at least 25 people to actually go through your whole program (just regular people) and give you feedback and critique it and tell you what they liked and didn't like. And so we put them through this honing process. And then at the end, they would have all this information about how to make it better, they'd incorporate that information, and that will lead to a final program. And what was so exciting to us about this whole system was that it seemed to work just phenomenally well, to help people succeed at making the thing they wanted to make. We ended up accepting (I think it was) 16 submissions as winners at phase one. And then we ended up with 14 of them going through all three of these phases and actually producing a program that implemented their idea, and they were really high quality. A lot of them were really excellent. It was just really exciting that we figured out this more scalable way to leverage people's desire to make something and put it in the world. And they were able to win these prizes along the way. They weren't large sums of money, but it was a carrot that helped motivate them. And also, we've provided deadlines by saying, "Okay, your stage one submission has to be at this time, your stage two at this time, and so on." And we provide this feedback loop so they couldn't just build a thing in the abstract. They were getting feedback the whole time to help them hone their idea. It's like trying to turn it from a shed into a cake but through a systematized process.

CASSANDRA: I find that so compelling. It's so hard to make things and you've come up with a kind of secret framework or secret recipe for getting people to produce things on a deadline, that are high quality and that they're motivated to do.

SPENCER: There were some really cool ones. One of them was on active listening, helping you be a better listener basically. Another one was on suicide risk, where they actually ended up consulting with three different experts on suicide risk, which I thought was really cool. One of my favorites was on memory biases where you actually go through this program which tries to insert fake memories, and at the end, it sees if it fooled you and it teaches you about how our memories can be biased. That was really fun.

CASSANDRA: Wow. So what are the ideas that you think we can take away to get people working on climate, or working on their passion projects, or whatever they want to do?

SPENCER: Yeah, I think a few things. One, social accountability is incredibly powerful: knowing that there's a human who's expecting you to do a thing by a certain time. Second, deadlines, as part of that, are just so useful. And the way that we did deadlines in that program is we said, "Okay, you need to submit by this day. But if you can't make it, let us know and we will work with you." And then if someone said, "I can't make it by that date," we would say, "Okay, what date can you commit to?" We could move the date, but then they had to commit to another date, so it wasn't just, "Oh, now just submit it whenever." They always had a deadline, and I think that's a really powerful and flexible model, where the deadline can be moved, but you must always have a deadline. A third thing: I think small financial rewards can be very satisfying. Even if they're not enough money to move the needle, it feels like a prize and it feels like something that you can be proud of trying to achieve. And so I think that's also really powerful. And finally, creating a systematic feedback loop — like we were talking about at the beginning of this conversation with cognitive walkthroughs and pretotyping, all these different ideas — and trying to make that systematic so that anyone can apply it. And you can't just develop something in a vacuum; you're gonna have multiple rounds of feedback built into the system.

CASSANDRA: Yeah, I guess another idea that I noticed in your design is that, when the creator does one thing, then you scale it up for them, you run the user study. They do one thing, and then something else happens in a very scaled and empowering way.

SPENCER: To wrap up, let's just talk quickly about the kind of corporate-based strategies to try to address global warming. You wanna tell us about that?

CASSANDRA: I wrote a piece on my blog called "Wall Street is pitching in for the climate crisis," and it talks about two separate ideas. The first idea is divestment, which is when activist investors sell off their shares in polluting companies in an attempt to publicly distance themselves from these companies, drive the stock price down, and make these look like less attractive investments. But it also makes sense from a purely selfish perspective because, if we truly believe that oil is going to be phased out or greatly reduced by 2050 under the Paris Agreement, then these are bad investments that you should not be holding anyways, if you're a rational player in the market. But the criticism of divestment that some people have is that divestment — back to the theory of change — doesn't actually produce the intended effect in the market because if you sell off these shares, then a less scrupulous investor may just come around and buy at a discount. I'm not totally sure how I feel about it because, on the one hand, that's true, someone else is buying the shares that you're selling off. But at the same time, if you are a large pension fund and you are holding shares in fossil fuel companies, then it's impossible for regulators to go take action against fossil fuel companies because now you're hurting so many different people's pension investments. So I do feel like the ethically responsible thing is to divest and not be taken as a financial hostage there. But the other idea that really complements divestment well is the idea of engagement. Activist investors go in and buy shares of polluting companies, but then influence them from the inside through shareholder resolutions. There is a coalition of investors called the Climate Action 100+. They have $40 trillion under management — which is huge — and they've been able to get some pretty interesting concessions through this type of shareholder activism. For instance, they've gotten BP to agree not to make any new capital expenditures — like opening up new oil wells — that would not be profitable under the Paris Agreement, which is kind of huge. They're reining them in from the inside. Some of the other steps that they're taking are applying pressure to appoint people to the boards of these companies and align board member incentives with sustainability measures.

SPENCER: One thing that you mentioned to me that made me think that this kind of approach might have a lot of promise is that there's a huge concentration of which companies actually pollute. Do you want to just mention that?

CASSANDRA: Yeah, so the statistic is that 100 companies throughout the world are responsible for 66% of world emissions. And this cuts across the sectors of mining, oil, gas, transportation, utilities, industrials, and consumer products.

SPENCER: I find that really fascinating because, if you think about the two big strategies we mentioned before — one of them being political change, and another being technological change — this provides a third lever, which is that if you could influence these 100 companies, then maybe that could have a huge impact as well.

CASSANDRA: Yes, totally. Especially if the structures in place at these companies are similar, then the changes that you enact with one can easily be scaled to the others.

SPENCER: Wonderful. Cassandra, this was so great. Thanks so much for coming on.

CASSANDRA: Aww, thank you for having me, Spencer [laughs].

[outro]
