CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 148: Is giving people a sense of agency better than giving them cash? (with Richard Sedlmayr)


March 9, 2023

Can giving people a sense of agency and dignity be better than giving them access to food, shelter, clothing, or cash? And what exactly can be done in practice to expand human agency? How does the value of agency-oriented interventions compare to the value of more tangible interventions? How robust are the findings about all of the above in light of the replication crisis? In general, how much confidence should we place (with or without the replication crisis) in the findings of social science research? How tight should the feedback loop be for organizations that do both research for and implementation of charitable interventions?

Richard Sedlmayr works with a private foundation called the Wellspring Philanthropic Fund, where he funds research and innovation to promote pro-poor economic development. He is also involved in the setup of The Agency Fund, a philanthropic partnership investing in ideas and organizations that support people in the navigation of difficult lives. Richard's background is in behavioral, development, and financial economics, and he has a PhD in Public Policy from Oxford. Richard has lived in a dozen countries and is currently based in the Bay Area. You can get in touch with him on LinkedIn.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Richard Sedlmayr about human agency, information interventions, and funding ecosystems.

SPENCER: Richard, welcome.

RICHARD: Thanks so much for having me, Spencer.

SPENCER: So you've done a lot of work on how we help people, not necessarily by giving them food or shelter, but by giving them agency, which I think is a really interesting idea. And I know you also have some fascinating findings on the reason to think of agency as an important intervention point. So, I'd love to start by just asking you, what do you mean by human agency?

RICHARD: Well, I define agency the way most people would: something related to the human capability to live with a sense of purpose and self-determination, and to have autonomy and dignity. I guess that's a pretty mainstream definition, and it's very aligned with how we think about it. But it's also worth saying that this is something I came to over a long period of time, just trying, as a funder and philanthropy professional, to support research on any number of ideas that people have come up with for how to reduce poverty and tap into human potential.

SPENCER: To me, one of the interesting things about it is that you didn't start there. You didn't say, “Okay, my goal is to increase agency.” You started with goals of helping people, and you converged to an agency-based approach. So I'd love to just hear some of the story about how that actually came to be.

RICHARD: So I've been very involved — initially a little bit as a manager, then as a researcher as well, but mainly as a funder — in what's called the evidence-based movement in development, which has been around for some two decades or so. It basically tries to subject all sorts of ideas in the development space to randomized evaluations, to figure out what works and what doesn't work in development. The idea is, in a sense, borrowed from clinical trials, where you test biomedical interventions. In development, you take things that are widespread — say, something like a mosquito net — and subject them to experimental trials in very much the same way. So you randomly allocate mosquito nets to some people and not to others, because you do have a financial constraint, and then you see what the impact of having access to mosquito nets is. And you can really test any micro-intervention using a randomized evaluation — you can subject microloans or different forms of training or whatever it might be (intangibles) to the same thing, really. But one thing that's become increasingly clear — I don't want to speak for the movement, but certainly increasingly clear to me and to my colleagues — is that this analogy, this translation from the biomedical world to poverty and economics, isn't super clean. There are some differences here.
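[Editor's illustration, not from the episode: to make the mechanics of a randomized evaluation concrete, here is a minimal sketch of random assignment and a difference-in-means impact estimate. The population size, baseline risk, and treatment effect are all hypothetical assumptions, chosen only for illustration.]

```python
import random
import statistics

def run_randomized_evaluation(population_size=1000, treatment_share=0.5, seed=0):
    """Simulate a simple randomized evaluation with a hypothetical binary outcome."""
    rng = random.Random(seed)

    # Randomly assign each person to treatment (gets the intervention) or control.
    assignments = [rng.random() < treatment_share for _ in range(population_size)]

    # Hypothetical outcome model: baseline risk of a bad outcome is 30%;
    # the intervention is assumed to roughly halve that risk.
    outcomes = []
    for treated in assignments:
        risk = 0.15 if treated else 0.30
        outcomes.append(0 if rng.random() < risk else 1)  # 1 = good outcome

    treated_outcomes = [y for y, t in zip(outcomes, assignments) if t]
    control_outcomes = [y for y, t in zip(outcomes, assignments) if not t]

    # The estimated impact is the difference in mean outcomes between groups.
    return statistics.mean(treated_outcomes) - statistics.mean(control_outcomes)

if __name__ == "__main__":
    print(f"Estimated effect: {run_randomized_evaluation():+.3f}")
```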

SPENCER: Do you have an example?

RICHARD: Yeah. So if you think, for instance, about when and where and why a mosquito net works, it's not so hard to see how, with some research, you can wrap your head around the necessary conditions. There needs to be a parasite, some people need to be carrying that parasite, and there need to be mosquitoes to carry the parasite from one person to the next. And if people in those conditions sleep under mosquito nets, they might be protected. So you can test that hypothesis and get to a fairly complete understanding of what's going on. If you subject economic ideas to that approach, you can test impact just as well. But the set of necessary conditions — the things you would really need to know in order to be confident you understand everything that's going on — is just a longer list. (If that makes sense.) It's not just whether there are mosquitoes and parasites, but whether or not training is right for somebody, or whether somebody should take a loan, borrow more, save more, fix their roof, or invest in their kids. That is a very personal trade-off that comes with a lot of contextual baggage. Does that make sense? Some people are in one situation, some people are in another situation — we call that heterogeneity. And some of that is visible, and some of that is hidden somewhere. And it's very, very hard to wrap your head around everything you would like to know in order to really be confident that something is "the right course of action."

SPENCER: Maybe an example here would be interventions that try to help people become entrepreneurs in the developing world, because what being an entrepreneur means is gonna vary wildly. And what someone needs to get started is gonna vary wildly. Because being an entrepreneur could be anything from weaving baskets and selling them to making bricks to just so many different possibilities. So, to say, “Well, is it good to give people money to become entrepreneurs?” There's no answer to that. You have to look at the details of the case and try to understand what that means in that context. Is that fair?

RICHARD: Yeah. I think that's fair. Also, when you're saying, "Is it good to give people money?" I think the idea of cash transfers obviously tries to address some of this concern about all this heterogeneity, because you can say, "Well, I don't know everything that's going on in your life, I don't know if you should be weaving baskets, or making bricks, or going to school, or whatever. So instead of sending you to a school for mechanics, I'm going to give you cash, and you'll figure out for yourself if you want to go to school for mechanics, or do something else."

SPENCER: I was referring to interventions that earmark the money — like, "You have to spend this on becoming an entrepreneur."

RICHARD: Exactly, paying for training or something. So that's right. I think the very idea of asking, "What is the training that people in poverty need?" is a nonsensical question in the first place. But also, if you want to be more nuanced and say, "Well, under these and those conditions..." — no amount of research will give you the answer, so to speak. And so, to get back to your question on agency: I think, as researchers, there are some things that you can know, learn, and figure out, but there are a lot of things that you won't know. But there are people who do know — the people who are affected, the people you're trying to help, essentially. And so, there are different types of knowledge that come into play when you think about human potential. I think when it comes to important economic and life decisions, there's hardly ever — I'm overstating a little bit, maybe — but it's very, very hard to find the right course of action that somebody should engage in without really having deep personal, contextual knowledge about that person and their context. And so, I think my interest in agency stems from that. When you look at the world, there are thousands of social science studies, including development RCTs now, and you look at the ones that are mostly...

SPENCER: You mean randomized controlled trials, right?

RICHARD: Right, like randomized evaluations. When you look in, sort of squint, and try to make sense of where the really exciting things are happening — where a lot is happening with little — it is often the case that the intervention engaged people's consciousness: it didn't decide things for them or see them as some instrument to get impact out of, but tried to just, at the margin, support them in the navigation of their own decisions. So, really, coming from this utilitarian world, we think about what's the best use of funding — but I've come around to the insight that if you just think in that way, you might miss out on some really promising potential uses of money.

SPENCER: So let's go into some examples of the approach you're using that involve agency or conscious awareness on the part of the person receiving the aid.

RICHARD: So yeah, these things often come in two flavors, you could say, and they come out of two sources of literature: some in economics, and some in social psychology. For one, there's a well-known study by researcher Rob Jensen, who did some piloting in schools in the Dominican Republic to try and understand why some students would drop out of high school. And based on the insights he gathered from that, he designed an intervention that basically just informed students about the returns to finishing high school. So, he collected data on the incomes of people in the Dominican Republic who do and who don't finish high school, and shared that with students. And lo and behold, there were huge impacts on dropout rates and on educational attainment. So, it didn't trick students into doing something that you thought was the right course of action; it just offered them a piece of data that day-to-day life might not have exposed them to. So, you could venture a guess as to why that might have been the case. Now I'm firmly in the domain of speculation, but it's not hard to imagine that if you grew up with a low socioeconomic status in the Dominican Republic, and you thought about potential paths to success and potential role models, you might see people who are very successful in sports or in entertainment who didn't need to complete high school. And you might somehow infer that there's no point in finishing high school. But in fact, there is. And so, simply sharing that data — this is a cold approach, [chuckles] you could say — treats people as agents who are trying to optimize as best they can with the information they have. This is almost the seminal study that's gotten a lot of people interested in information as an intervention, and particularly customized information that's actionable to you, that's relevant to you.

SPENCER: One thing I find really interesting about that is that it seems usually information interventions don't work, or they have a really small effect size. And so, for things to go so right in a study like that, it seems like you have to know just so much about the context — like, you have to know, "Okay, students actually already have a misconception about the value of high school. They don't think it's going to have much impact on future incomes." Additionally, you have to know that they really care about their future incomes. And furthermore, you have to know something about their ability to perform in high school, because maybe they're not performing well, or are dropping out for reasons outside of their control, in which case increasing motivation wouldn't help. And so, you have to have this series of things that you know to be true in order to predict that it actually is going to work. Whereas, if you just pick a random piece of information that you think someone should know, probably those things won't be true, and it's not going to do anything.

RICHARD: I think you're right that there's a lot of value in doing the homework, really trying to understand the context you're dealing with, and trying to unearth things that are truly new and different. But you're also saying something that I've heard many times — and I wonder where it comes from — this notion that, "Oh, the impact of information interventions is generally low." I don't quite know where that comes from, because in cost-effectiveness terms, my read on the evidence is that many of the most cost-effective things that have ever been demonstrated have essentially been information or sort of intangible-type interventions. That's obviously, in good part, because they can be so extremely cheap. I don't know if you want to expand a little bit — whether that was an off-hand comment, or something you actually have a strong position on, that information interventions are not generally an impactful or powerful thing — because I hear it often, and I don't quite know where it comes from.

SPENCER: Yeah, let's unpack that a bit. There are a few different pieces to this. One is the intention-behavior gap: a lot of people know they should exercise, they know they should eat healthy, and they don't do it — they don't do nearly as much as they intend. So there's this whole class of behaviors where we know that information isn't the problem, because tons of people get the information and then don't do the thing. They even say they want to do the thing, and they still don't do the thing. So that's a class of things where we know pure information doesn't seem that effective. I would also say that, from the nudging literature, my understanding is that the effect sizes for information tend to be pretty small. That doesn't mean they're not cost-effective. Let's say you have an information intervention that moves behavior so that 4% more people do something. At a large scale across a country, if that's a cheap intervention, it can be hugely valuable — if that 4% means a significant change in those people's lives. And it may cost almost nothing to provide: if you send a text message, it costs almost nothing. So, I would distinguish between effect sizes and cost-effectiveness, and I think that's an important distinction. But yeah, I'm curious to hear your thoughts on those things I just said.

RICHARD: I would agree with everything you just said. There are only a few kinds of single-dose, information-type interventions that are life-changing on their own. So yeah, I would agree with that. But obviously, these things can accumulate. If you get involved in that world of trying to build systems that provide people with useful information, you don't want to just provide one discrete data point, dismantle that system again, and call it a day. When you're talking about this question of, "When is information the constraint?" — this allows me to get back to the second type of intervention I was talking about, the one that comes more out of the social psychology literature. Because to an economist, information is just information. You have it or you don't; maybe you believe it, or you don't. There might be that dimension. But to a social psychologist, there's just a lot more to it. When you receive a stimulus, you make meaning of it somehow. And where it comes from might not just change how much you trust it, but also how you interpret it, how you find that information in your library of mental models, and how much it might change your perspectives. So it might help for me — when I was pointing to these two flavors — to go back to that second class of interventions (you could say) that relates more to social psychology. Those are interventions that, in some shape or form, don't just offer people a piece of data as though they were quasi-rational agents trying to optimize, but try to engage with how people make meaning of the world, how people shape their identities, their narratives, and their beliefs. A simple illustration: there's a researcher called Emma Riley. There was this movie coming out by Disney called The Queen of Katwe — I don't know if you've heard of it — about a Ugandan chess prodigy. It's a true story about a girl who grew up in the slums of Kampala, in Katwe, and became a well-known chess player, and Disney made a movie about her. And so the question came up, "What might it do if girls in Uganda watch that movie?" She invited a few thousand girls to the movies and randomly allocated some tickets to watch that movie, and some tickets to watch another, less inspirational movie that seemed like it would probably be less relatable to the audience. And that, too, had huge effects on educational attainment and performance, especially on math, and especially for lower socioeconomic status girls. So you can ask, what was the information there that was being transferred? Is it just the cold fact that there was once a girl from Katwe who became a chess prodigy? Is that fact what changed girls' minds about their own capabilities? Or is there a little bit more to it? It is not just the data; somehow, going to the movies allowed girls in particular — not exclusively — to reimagine and reconceive their own narratives and their beliefs about their own potential. Does that make sense? There are these other types of ideas that don't just see information as data that you do or don't have, but really think about how certain stimuli engage with your existing mental models or existing conceptions. And so, whether or not you have "access" to something becomes a very blurry concept, I think, to a psychologist.

SPENCER: Right. Because there are a lot of other variables going on there besides information. One could be motivation, one could be beliefs about the self, one could just be what you can imagine — maybe now you can visualize yourself doing something in math and science that you couldn't before. So I think information is merely one piece of that.

RICHARD: I think that's right. But if I have to summarize how I currently think about the grand theory of how these things fit together in my mind — information, beliefs, and motivation — it is something like the following: all humans try to navigate the world with some deliberation, but they don't navigate the actual physical world. They navigate the world that's in their head. And that world is composed of certain schemas, beliefs, and social constructs — it's like a stock of who you are, a stock that's shaped by your experiences. And that determines, basically, how you engage with your opportunity sets; it determines what you see and what you don't see. And so any stimulus — whether that is a piece of information, a role model, or an intervention, whatever it might be — to the extent that it works, it works by updating your mental models about yourself, your surroundings, your future, and the world. And so, the agency-type interventions that I think are going to be the most interesting ones are the ones that speak to mental models that are particularly important and constraining — that maybe relate to particularly important opportunities that life may not have made salient to you, or particularly constraining beliefs that may have come out of a long and difficult history of living in grinding poverty. So that's how I make sense of everything (so to speak) that relates to these, whether it's information or media or other types of interventions. I just think of them as shaping people's mental models.

SPENCER: How important is the timing here? Because it seems there are certain critical periods where you have to make certain types of decisions, like, "Do I stick with high school?" or "What kind of career am I gonna pursue?" or "Where am I gonna go to get medical care now that I'm sick?" So I'm wondering, how important is it that it's delivered within a certain time window?

RICHARD: This becomes really important and interesting when you start to think about operationalizing these things. When you think about how you might unlock development impacts through these ideas, it becomes incredibly important to think about the life journey of different people, the different key decisions that they face, and the pieces of data or insight that might be missing. If you want to start to think about where to get involved, you can think about your own life journey, other people's life journeys, and realize that, "Well, at some point, you have a decision to make. There might be some action you may want to take. Maybe you want to start studying for your SATs, or something." And high socioeconomic status kids will tend to be confronted with that idea naturally through their environment, while lower socioeconomic status kids might not. So there might be something important there, on the path to the decision of when to start preparing for college, that's worth thinking about. Or — something we, at the Agency Fund, have been thinking about a lot recently — the decisions around pregnancy. A lot of maternal and child mortality takes place right around the time of childbirth. There's a number of important and potentially new decisions that a person has to take during their first pregnancy, in particular, that they might not have been confronted with before. And some of those are maybe not entirely obvious. It turns out, for instance, that where you give birth seems to be a fairly high-stakes decision. Simply put, if you give birth in a place that doesn't offer C-sections, that doesn't have surgical capabilities, that's a major, major risk factor. But your life may not have exposed you to that truth or to that insight. There are existing organizations out there who are already onto this idea of working with governments to contact millions and millions of women, especially in low-income countries, who have maybe had their first antenatal visit, and offering them counsel, guidance, and helplines for their pregnancy journeys. So that already exists. And so, where the rubber hits the road — where these kinds of agency ideas start to really become interesting and compelling as development interventions — is when you think about things like where and how we might be able to gather and scrape data on facility quality and surgical capabilities, and make sure that the advice people are getting is not just generic advice like "take your vitamins," but specific, actionable advice like, "There are five facilities near you, but only one of them offers C-sections. So, to the extent you can, maybe consider giving birth there." That is an example of an insight that would be incredibly interesting to dig into.

[promo]

SPENCER: So one thing I'm wondering about: you said there are a couple of studies that had really compelling results, really high effect sizes, for interventions that in some ways are very simple, like showing someone a movie. And I imagine one thing my listeners might be thinking about is the replication crisis, the fact that, in a number of areas of science, many papers don't replicate. So I'm just wondering how you think about interpreting these studies in light of the replication crisis, where increasingly people are becoming aware that a lot of studies aren't replicating in different areas of science. Does that make you more skeptical when you see studies like this? Or do you have ways of navigating around that? I'm curious to hear your thoughts.

RICHARD: Yeah, something I have thought about quite a lot. I think the replication crisis is very pertinent to hot interventions and ideas like the ones we're discussing here. But it's a particularly large problem if you think of them in the context of the existing contemporary norms of social science and psychology, where people write papers that deal with complex issues and present certain findings they have encountered in a way that you might interpret as fundamental or basic truths, when in fact they're just things that happened to be found here and there, based on specific contextual factors that happened to be present in that one setting. So, if you think of these interventions as, "What is the thing that we should do? Should we be providing this or that type of data to this or that type of person, and then do that forever?" — and the way we figure that out is we get a bunch of researchers to find the intervention that works (so to speak), and once we have the thing that works, we push that out — then, yeah, I think that's very vulnerable to things just not replicating well. But if you want to be engaged in the kinds of things we're talking about here, you want a much tighter loop between research and action than I think we're used to. And this idea that most of the insights flow through publications and papers may not be such a useful model. So I think it's very useful to do what Rob Jensen did — to demonstrate, "Look what I did. This is a new way to think about development impact." But I don't think that having a hundred papers like that — "if people are given this or that information, then this happens" — gives you the suite of things that you should do forever. Rather, the ideal ecosystem you would want, in order to be able to do these kinds of things, is to set up organizations that specialize in certain decisions that people in poverty need to make — related to the things we just talked about, such as education or pregnancy — and try to get better, and better, and ever better at communicating with people in a productive and useful way. From time to time, you want to establish the impact of what you're doing. But much of the time, you're also just iterating and learning and improving. (I don't know if it's obvious where I'm going with this.) I think part of the source of the replication crisis is an unrealistic expectation people have about the capability of social science to answer things with a degree of certainty that isn't warranted when you work in these kinds of complex environments, where there's all this heterogeneity and all these moving parts. Does that make sense?

SPENCER: Yeah. I want to distinguish, though, two different things that we could mean when we are talking about the replication crisis, because I think I meant one, and you might have meant another. The first (which is what I was referring to) is someone tries to faithfully do the same study on the same population or almost identical populations, trying to make it as close as possible to the original, and doesn't find the same effect. The second is more of a generalizability issue, where the effect is real on that exact population in that exact setting. But if you try to move it to a slightly different setting, or a different population, it fails to generalize. And I think maybe you're explaining more about the generalizability. Is that correct?

RICHARD: I'm not sure that I was specifically speaking to one or the other. But if we talk about the history of the replication crisis, you have a bunch of researchers producing a bunch of papers, and then these things are replicated and often don't show the same kinds of effects that were originally found. But I guess what I'm trying to get at is that it shouldn't necessarily be too much of a surprise. Suppose you don't think there were nefarious things going on, like the falsification of data or all this p-hacking, and you just take a study and say, "I'm going to try to replicate it with the same population elsewhere" — let's say that Dominican study we were just talking about — then what seems like the same context may not, in fact, be the same context. We're getting back a little bit to the original question: where does this interest in agency come from? Where does the appeal of an agency lens come from in the first place? There are a lot of subtle differences that might be governing, that might be moderating (so to speak), whether or not an intervention works here and there and elsewhere. And a perfectly honest researcher might test it in one setting and find this, and then it might be tested in another setting and not work. That doesn't necessarily indicate that something nefarious was going on. It might be that there are just hidden moderators we haven't identified. And so it's more like an appeal of the agency agenda, I think, rather than a weakness, in my view. Because in that world, you don't even presume to understand everything that's going on. You're not presuming to know everything that might govern human decision-making. You just focus on specific areas where there appears to be a specific market failure (so to speak) in the generation of a specific piece of information. You try to focus on that, and gradually get better and better by testing and iterating, and just providing that little sliver of insight and information. So that's not a world, I think, in which researchers in universities run a bunch of randomized evaluations to find the final suite of correct interventions. It's more a world in which different organizations focus on trying to provide useful counsel on a specific issue, and use data in a way that allows them to get confident that they're onto something — but not so confident that they think it will always necessarily generalize beyond the settings they've tested so far.

SPENCER: Maybe a point of disagreement between us is that I think a lot more papers are actually just statistical noise due to p-hacking than you do.

RICHARD: I think I would agree. The research insights that motivate this are, I think, more demonstrations of what may be possible, rather than a representative sample of what you should expect if you tested information interventions 100 times — as in, "on average, this is what you would find, because it's what's been found in research papers." That's not what I'm implying. I would agree with you that you might just be seeing the tip of the iceberg, and a lot of failure might be associated with actually trying to make these things work. But that doesn't mean they can't be made to work.

SPENCER: Let's talk about this iterative approach that you mentioned, because I find that very fascinating. The idea that you could roll out an intervention, and that's not the end of the research process. It's not like you do a study, and then you prove the thing works, then you do the intervention. It's like, “No, you roll out an intervention, and you're learning as you go.” And this is very standard in product development, like in the startup world. The idea that you're gonna roll something out, and you're gonna keep iterating and learning and it's not necessarily going to be a formal, randomized controlled trial — it's usually not — but you are learning and iterating. But I'm wondering how you think about incorporating learning in the process of rolling something out in a way that ends up being robust, where you actually can tell that you're achieving your goals?

RICHARD: It's certainly a challenging question. It's not like I have all the answers. I'd say I share your fascination. We can talk about this a little bit, but I don't want to imply that I necessarily know what can and can't be done, or where the research and policy world might go. The status quo — in which researchers do research and find "the evidence" (so to speak) on what works, and then that is translated into practice without further research, because presumably we already know how these things work — is not a terribly useful approach when you're working with these kinds of complex interventions. It seems more useful to take this kind of lean learning angle: you implement, you try to do your best, and in the process you learn something, which hopefully allows you to be even more impactful in the future. And so ideally, I think, if you're involved in these kinds of interventions — where you are going to have a lot of iterations of failure, and you're never going to know exactly everything that is going on — you just need to be lean in this way. And I'm saying that not just because of some grand diagnosis that this is a sector that should be totally different than it is, but also from observing that there are organizations popping up that are working in this space — the space of what I call agency; other people might give it different names. They try to offer large numbers of people — farmers, students, teachers, parents, whoever they may be — useful insights and guidance. They don't just scale something that's been found at some point to work; they are research-and-action players. They work at that intersection. They implement, they execute. I'm thinking of organizations such as Precision Development, or Youth Impact — researcher-led organizations that do research and implementation at the same time. It's not so surprising that a sensible approach for dealing with these tricky issues is to deal with them the way a tech firm might: keep testing, trialing, and iterating to try to get gradually better, rather than take one closed-end thing — like the analogy to the mosquito net — and just roll that out to everybody. So I do hope, and expect — and that's what I'm spending a good chunk of my time on now — to cultivate an ecosystem of organizations that work at the intersection of research and action in this way. They are part implementer and part research organization, and they need both of those muscles. I'm reminded of the work that you're doing, as well, at Spark Wave.

SPENCER: I'm really interested in this model of building studies into the work of distributing something, or building randomization into that process. One example where we've actually applied this is with our app Mind Ease, for people with anxiety. We made it so that there's some randomization occurring in some cases, to figure out which parts of the app are working well. And then we're able to take those learnings and apply them across the other users. So there's this never-ending randomized controlled trial, in a sense, that's occurring — but of course everyone benefits from it, and it also gives us this feedback loop.

RICHARD: Yeah, right. I haven't worked in big tech, but I think that at any given point, Google has hundreds of basically randomized evaluations going on that they would call A/B tests, where they might change the font or the color or make some subtle change in the rollout of their product, and gradually gather the feedback and hone the product that way. So there's an interesting question: might this process of building randomization and testing into implementation be something you can learn more from than just questions around font and color? Are these things from which you can learn very fundamental truths about (well, certainly) the problems that you're trying to tackle as an implementer? And then, might it even be that this could be a very useful tool for science, and generate more generalizable knowledge? I think that's a fascinating question.
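[Editor's illustration, not from the episode: a minimal, hypothetical sketch of what an A/B test like the ones described here might look like in code — hash-based assignment of users to variants, plus a per-variant average of an outcome metric. The experiment name, user IDs, and event log are made up; this is not Google's or Mind Ease's actual implementation.]

```python
import hashlib
from collections import defaultdict

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant using a hash of their ID.

    This is one common way product teams randomize A/B tests: the same user
    always sees the same variant, and the split is roughly even across users.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def summarize(events):
    """Average an outcome metric (e.g. a completion flag) for each variant."""
    totals, counts = defaultdict(float), defaultdict(int)
    for user_id, outcome in events:
        variant = assign_variant(user_id, "exercise_order_v1")  # hypothetical experiment name
        totals[variant] += outcome
        counts[variant] += 1
    return {v: totals[v] / counts[v] for v in counts}

if __name__ == "__main__":
    # Hypothetical event log: (user_id, 1 if the user completed the session else 0).
    log = [("user1", 1), ("user2", 0), ("user3", 1), ("user4", 1), ("user5", 0)]
    print(summarize(log))
```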

SPENCER: Yeah. One issue that Elizabeth Kim, a behavioral scientist who's been on this podcast, brings up is that she finds that companies are often doing these experiments, which they think of as A/B tests, but they don't necessarily have a deep hypothesis behind them. They're like, “Okay, let's try the button being bigger, or let's try making it red instead of blue.” So they are able to optimize. They're able to iterate, but it's not necessarily producing this generalizable knowledge that can then be used to solve problems in the future.

RICHARD: Right. You can imagine three levels of knowledge. There might be this very trivial, uninteresting, optimization-type knowledge — how big the button should be, what should be the first option on the menu, that kind of thing. And on the other far end, you could imagine that this work can be used to do science, to find generalizable truths about human behavior and psychology, and even about the constraints to prosperity. I don't know how far you can get there, but I think there's something in between, where it might be possible to use these experiments to learn more than just button size and font — to learn how to run a program that tries to move certain specific social outcomes, and how best to do that. In the case you were talking about (Mind Ease), how best to design Mind Ease so that it's actually an effective anxiety app? That seems a somewhat lower lift than arriving at very generalizable, academic-type truths. And I think that kind of work, that kind of research, has a lot of hidden potential that's waiting for organizations and implementers that have some research mindset — that are able to collect, generate, and analyze data — to do really, really powerful work. That's not necessarily limited to agency-type interventions. But very often — because we're talking about cases where you're probably interacting with users, you're probably using technology, you're probably gathering data, you're basically offering intangibles — very often you'll be in this world that we're talking about.

SPENCER: One thing I worry about when it comes to nonprofits iterating is that they can iterate into strange, suboptimal equilibria. I'll just give a couple of examples. One might be that the things they can easily measure are quite different from the ultimate goals of the program. So, let's take an educational intervention. It's easy for them to measure (let's say) how many children went through their program. It might be hard for them to measure how many children had better lives because of it. And so they can end up optimizing towards these metrics — "Oh, look, we were able to educate a lot of people, and the process went smoothly" — but that doesn't lead to the positive outcome they hope for, because that's not where the feedback loop is. Another example of this, that's even worse, is iterating in a way that's just about pleasing donors. It's like, "Oh, we've iterated to a point where we know how to make donors really happy." But that's disconnected from actually helping people in the world. We now just know the things to say, and the slides to show, that let us raise a lot of money.

RICHARD: Yeah, very legitimate concerns, for sure. I don't know that there are great answers. I think there are going to be cases where this iterative learning process is incredibly hard, because the outcomes you're measuring might be hard to collect, they might be very multi-dimensional, and it might take a long time for the dust to settle on them and for you to gain confidence. But then there are other outcomes where that's a bit less hard. There are some outcomes of large social importance where you can certainly see how one might be able to do this — and that may include, to a degree (there's a lot to debate there), mental health interventions. We can certainly check quite quickly whether or not you've been able to make a dent in mental health outcomes, at least short-term ones. Educational data is often already collected in many countries, with very high-quality data on dropouts and on standardized test scores. There's a lot of useful data there. And it depends on the specifics of where you're working — we were just talking about maternal and child mortality — whether you're in a setting where you're able to gather data that actually allows you to gain confidence that you're having an impact, and to gather it in a reasonable way. I think you're right, you're pointing to some key challenges with that agenda. But they're not insurmountable. There are just shades of gray as to how hard it might be to implement these things in different contexts, with an eye towards different outcomes. And of course, on the upside, when and where you're in a position to do this, the potential to have cost-effective social impact could be enormous — the idea that humans might be so far from their true inherent potential. That's true for everybody, but certainly also for people who grew up in low socioeconomic status contexts. Yes, there may be many outcomes that we can't quite reach into and fully understand, but there are quite a few where you can. I think that balances out some of the concerns about how well you're able to do it. You're not going to be able to do it everywhere, on every outcome, all the time.

[promo]

SPENCER: Before we wrap up, there are a couple more topics I want to cover. One is the relationship between what you're doing and the effective altruism movement and way of seeing things. And the last is to hear about the Agency Fund, which is your new initiative. So, let's jump into the effective altruism question. How do you think about what you're doing relative to effective altruism? And I'm also curious: how do you think the interventions you're funding compare to (let's say) standard GiveWell charities, or cash transfers, things like that?

RICHARD: I think there's a certain burden of proof on these ideas in terms of, "Can you be cost-effective?" But then also — compared to what effective altruism is already doing — assuming you have some demonstrations of cost-effectiveness, can you also persuade us that this is something that "generalizes"? If there are a lot of hidden variables and a lot of heterogeneity, and we don't quite understand what it is, how can we ever possibly get comfortable funding these things? That's something to work out over time, and a debate I'm very much hoping to have with effective altruism. I can say my interest in this world actually comes out of a history of involvement with cash transfers. It started with a personal experience of just giving somebody, on a personal level, a small transfer to help them out with something, and noticing what seemed like a large impact from that small transfer. And I got really fascinated with the question of, "To what extent was it the money?" If that person had found that small amount of money on the street, would they have been able to create the same change from it? And it raises the question: a cash transfer doesn't just come with cash. It comes with a narrative, it comes with a certain amount of meaning — there's an underlying idea here that somebody is helping somebody else. What does this mean? Where does that come from? And so the question of, "Might these narratives that come with cash transfers be something that can make a big difference? Might there be a lot of hidden potential?" There are over a billion people living in households that are enrolled in some kind of cash transfer program. So, might there be mileage in thinking very hard about how to frame these transfers, and thinking very hard about how they're perceived? This is actually where a lot of this agenda comes from. And this is something I've had the opportunity to fund quite a lot of research in.

SPENCER: So there are over a billion households that receive some form of cash transfers? That's such a large number.

RICHARD: Not a billion households. I don't think there are a billion households. I think there's a billion people living in households worldwide that are enrolled in some kind of cash transfer program. So that might be something like 250 million households, something along those lines. The World Bank mostly tracks these things. I think what I'm stating is correct. But yeah, I think effective altruism got very fascinated for a long time with the idea of direct cash transfers for, I think, good reasons. But also, it's important to remember that there's actually a lot of people worldwide who are already receiving cash transfers through social protection programs. There's a lot of funding being moved there already.

SPENCER: So these are mostly government programs?

RICHARD: Yeah, exactly. Government programs — the US just went through that with the so-called stimulus checks, for people who've experienced that here. These social protection programs come in different shapes and forms. But there's an obvious question: if you want to do as much good as possible, should you give out cash transfers yourself, or should you be thinking about how to improve existing cash transfer programs that are already colossal? I'm very much of the mind that supporting the research and design of cash transfer programs would be a very high-returns thing. Something I actually had the opportunity to be involved in as a researcher was a direct head-to-head comparison — and even interaction — of some of these ideas around goal-setting, plan-making, and role-model-type interventions with cash transfer programs. We do have some data that the returns can be higher — even though that's a modest, pilot-scale type of thing, they seem to be substantially higher at the margin — because, obviously, once you have the means of reaching people with intangibles, it gets cheaper and cheaper. So there certainly is some of that data. Of course, effective altruism is also changing a bit now. I think cash transfers are no longer generally seen as the benchmark to beat; some people might be talking about 5x cash transfers as the right benchmark. I think that's very realistic for lots of agency-type interventions to hit. But then, a lot of effective altruism is now also thinking about other things — short-term impacts versus long-termism, and so forth. So yeah, I'm very interested in having that conversation and that involvement with effective altruism around the potential of these things, the evidence that we already have, the evidence that could potentially be generated, and also the ecosystem we've been talking about here — this ecosystem of players who try to get really good at doing and supporting these kinds of decisions at larger and larger scales.

SPENCER: Do you think that agency-based interventions that you've seen outperform (let's say) GiveDirectly-style just giving money to people?

RICHARD: I was a researcher on a trial with GiveDirectly. There's some data that I think we have publicized now that we can point to... Yeah, we encountered larger poverty reduction, dollar-for-dollar, in that trial. So we do have very direct comparisons here. There are also within-study comparisons where people have offered cash transfers and psychological interventions in some shape or form. There's a paper in Nature by Thomas Bossuroy and several colleagues — including Dean Karlan, Chris Udry, Catherine Thomas, and several other researchers — where they also interacted or added some of these interventions very effectively to existing social protection programs. So, yes, I think there are pretty strong indications that, dollar-for-dollar, there's quite a bit of mileage you can get out of these interventions. I'm not suggesting that people put excessive weight on things that aren't yet peer-reviewed. But there's certainly a lot already out there on the returns to some of these ideas.

SPENCER: How do you think about finding the next set of agency interventions that are going to be highly effective?

RICHARD: What I'm currently involved in is basically working with different donors who have different outcome priorities — say (I don't know) the Gates Foundation might work on financial consumer protection, and there's an overlap between that and the agency agenda — and thinking about ways in which those outcomes might be advanced through agency-type interventions. There's certainly a long list of potential applications. As I said before, I'm very interested in parenting, very interested in perinatal counseling. There are 20 or 30 ideas like that where we've seen people piloting exciting things that seemed worth investing in. But it's partly a matter of where the funding is. I think most human development outcomes can be moved through these approaches. And so, the question of where to start is partly a matter of where there might be large returns, but also where there might be funding from people who are looking to harness those returns. It's a bit of both of those things that determine where the focus is.

SPENCER: But say there's interest in an area and you think you can get it funded, how do you decide which intervention to do? Maybe there's 10 or 20 different things you could be doing?

RICHARD: Oh, yeah. But for the most part, there are existing players out there. So, I'm not as out there and proactive as you might think. I work as a funder, so I'm looking for people who are already working on existing ideas — who, in some shape or form, are counseling (I don't know) students on their educational choice journeys, or parents in early childhood development, or...

SPENCER: But are you looking for the ones that already have a lot of evidence — let's say a randomized controlled trial showing how effective their interventions are? Or do you have to make guesses about that? How do you approach that?

RICHARD: Yeah, there are a lot of blanks to be filled in. Generally speaking, the ideal case, of course, is where you have somebody with an existing organization that reaches millions of people and is ready to run really credible research on what they're able to do. That's going to be the exception. There are going to be gaps on some of these fronts. And often, people are just coming up with ideas — things they may have piloted, or an idea that somebody sent their way and they're thinking about what they might be able to do with it. You need a combination of the capability, the evidence, and the idea in the first place. Often it just starts with the idea. For example, we're actively looking for people who might be interested in questions around how you might measure the quality of vocational schools, and make sure that more people are capable of selecting high-quality vocational schools and steering clear of predatory ones that promise a lot and offer little — a very interesting idea to help students navigate that journey. There's some data to be collected, or some evidence to be found. It's not predetermined that this will necessarily work or can be done, but it seems likely that somebody would be in a position to figure it out. So maybe in that case, the idea is there, but the person isn't there, and the network isn't there, but somebody might be able to make it work. How can you be confident, or how can you know, whether something is a good idea or not if you don't have everything in place? I think you have to extrapolate a little bit from the constellation of things you already know — get a little less hung up on the idea that every new idea somebody comes up with has to be evaluated just on the merit of the evidence that you can directly bring to bear on the underlying hypothesis. You can triangulate from other things that we already know, or that we've already seen. I guess that's what it means to be Bayesian. And if you're in that world enough, you can certainly see ideas come up that maybe don't have all the evidence you would ideally have, but, triangulating from other things you have seen work, it seems likely that it's a good idea worth trying and testing.

SPENCER: So then, will you fund a small pilot first and scale it up? Or fund a study on it? What's your process for building up to greater confidence?

RICHARD: Usually, it would start with something like a project that people apply for. People who have a certain idea go to agency.fund, basically, and share that idea. And that's usually then in the form of a time-limited project, where they want to accomplish some discrete, time-limited objective of making something work, or gathering some piece of data, or reaching so many people with something that they've worked up to a certain degree of sophistication. And then, over time, the hope is that some of these things can be made larger — but maybe with the caveat that I don't think scaling, in this case, necessarily means just scaling the intervention, for the reasons we just talked about. Ideally, you would want to scale the organizations that are capable of doing these things, and doing them in an ever better and better way. So the end game — again, this is all work in progress — after some successful pilots is more likely to be organizational investments that don't just fund a discrete solution or a discrete bot or whatever it might be, but invest in an organization that's demonstrated the capability to make a dent in something. And then you increasingly shift from funding the idea to funding that organization, and the learning muscles and data muscles that such an organization would need in order to be successful.

SPENCER: Richard, thanks so much for coming on. It's a really interesting discussion.

RICHARD: Thanks so much for having me.

[outro]

JOSH: A listener asks, "Do you ever get anxious about whether or not to post something on social media? Or do you ever get anxious after you've posted it?"

SPENCER: You know, I don't tend to feel very anxious about posting on social media. I do it pretty regularly, and I think I'm just kind of used to it. Occasionally, if my post involves a critique — and I don't tend to critique people very much because it's really not my style; I try not to be negative — but if my post contains a critique, I think that will make me a little bit more nervous, and I'll be more likely to check it over and over again. I would also say that, obviously, sometimes I make mistakes in my posts; and if I make a mistake, that will make me more nervous, and until I get that mistake fixed and update the post and respond to the person who made the correct critique and say, "Thank you for pointing that mistake out," I think I'll have trouble not thinking about it.
