CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 069: How broken is social science? (with Matt Grossmann)


September 3, 2021

What makes studying humans harder than studying other parts of the universe? Is social science currently improving its rigor, relevance, and self-reflection? Is it improving its predictive power over time? Why have sample sizes historically been so small in social science studies? Is social science actually able to accumulate knowledge? Have social scientists been able to move the "needle" on real-world problems like vaccine adoption? Is social science becoming more diverse? Specifically, does social science have a political bias? Are universities in crisis? Do the incentive structures in universities make them difficult or even impossible to reform?

Matt Grossmann is Director of the Institute for Public Policy and Social Research and Professor of Political Science at Michigan State University. He is also Senior Fellow at the Niskanen Center and a Contributor at FiveThirtyEight. He has published analysis in The New York Times, The Washington Post, and Politico, and hosts the Science of Politics podcast. He is the author or co-author of How Social Science Got Better, Asymmetric Politics, Red State Blues, The Not-So-Special Interests, Artists of the Possible, and Campaigns & Elections, as well as dozens of journal articles. You can find more about him on his website.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you joined us today. In this episode, Spencer speaks with Matt Grossmann about the quality and reproducibility of social science studies, biases among social science researchers, and the university system.

SPENCER: Matt, welcome.

MATT: Good to be with you.

SPENCER: So I'm really interested in your point of view on the field of social science. You have this book coming out, where you discuss in detail all these different ways that social science is claimed to be problematic or biased or non-rigorous. And I believe that you see many different forms of progress along these different dimensions. And I wonder, actually, if we disagree about this; I wonder if I'm somewhat more pessimistic about things improving. So I'm really interested in digging into some of those topics with you.

MATT: Happy to do so. Yes, I feel like I am optimistic about the direction, but I recognize the difficulty of the enterprise. I just think it's been like that all along.

SPENCER: Okay, so you're not necessarily saying that we've solved a lot of these problems, more that the direction of progress is a forward one, and we're kind of moving towards solving them.

MATT: It's definitely inherently more difficult to understand the human experience in a scientific way than it is for some other parts of science. But these are problems we've long dealt with. And to the extent that we're recognizing them, we're on the way to improving them.

SPENCER: Yeah, I mean, it certainly is true that a particle, you can kind of observe it from the outside, you can control it very precisely. Whereas when you're dealing with humans, you know, every human is a little bit different than every other human. We ourselves are also human. So we're kind of studying ourselves, which puts us in sort of an awkward position, and politics gets wrapped up in it. There's not a lot of politics, at least right now, among particles.

MATT: Yeah, those are the two inherent problems that were longstanding, that differentiated social science from other parts of science. But one of the analogies I like from the philosophy of science literature is from Adrian Currie. And he talks about comparing geology and archaeology. And just along with what you just said, when you're investigating any particular site, you don't have the full picture. But in geology you can make some assumptions about the similarity of rocks of various types collected elsewhere and what that tells us historically, and those assumptions aren't based on the practices of geology versus archaeology. Those assumptions are baked into the world, and they just aren't going to be there for the diversity of human experience, or for the way that our interaction with the evidence changes how we view it. Those are long-standing problems for social science, but they are ones that we're increasingly recognizing.

SPENCER: So you say that social science has never been more rigorous, relevant, or self-reflective. I'd love to dig into those. Let's start with rigor: you say it's never been more rigorous; tell us about that.

MATT: Sure, we have more data, more methods, more interpretation of evidence through multiple perspectives than we've ever had before. And our tools are getting better: our data collection tools, our data analysis tools, and our ability to design research that takes advantage of quirks in the world, or of statistical advances, to come to more definitive, but not fully definitive, conclusions. So all of that is improving rigor throughout the major social sciences.

SPENCER: So could you maybe give some examples of this increased rigor?

MATT: I'm a political scientist by training, so obviously those are the fields I know best. And although people complain that we might have gotten an election, a very close election, wrong, or might not have been able to predict all of the human behaviors before they happen, our models of all of those things are getting better over time. So our ability to understand elections, to understand the behavior of individual voters, to understand what is happening in terms of congressional votes and the congressional agenda, is getting better over time. That doesn't mean that we can predict anything and everything in advance. But it does mean that there's a sort of slow evolution closer to full understanding. And a lot of that doesn't come from the political scientists who are doing it just being better people or something. It comes from the fact that we have more data availability, we have more ingenious causal inference strategies, and we have the ability to kind of interact between our methods and our questions in ways that we haven't had before.

SPENCER: So would you say that we're now better at predicting what's going to happen in the future in the political science realm?

MATT: It depends on what your goal is, right? If you want to know what's going to happen in the next national election, we can tell you who's gonna win 350 out of 435 districts very easily, and we can tell you what the likely determinants of the final ones are. If you want to know a few years in advance whether, you know, President Biden is going to get reelected in a 50-50 country, that's much harder to tell you.

SPENCER: So it just depends on the dynamics of how close it is and things like that. I mean, I'm a little surprised by that, because it seems like the 2016 election really took a ton of people by surprise. Is that a counterexample or not really?

MATT: That's an example I talk about a lot in the book, because I want to understand sort of how people reacted to it within political science and within the social science community in general, the sort of Trump studies that developed. So on the one hand, we have the sort of single worst form of social science question, the kind that people advise you to avoid in a research design situation: we're trying to explain a single outcome that has 50 or more variables related to it. And we're doing so without complete theory and with a whole bunch of truly stochastic factors and changing causal dynamics over time. So that's a really hard thing to do. Nonetheless, we do still make progress, because it's still a question that people want answered. They want to know why the polls were off in particular states; we make slow progress in answering what they want to know. Is this indicative of some broad change in American public opinion or in the state of democracies around the world? We have more information about all of those as well. So we do make progress, but it just doesn't always meet the expectations that people have about predicting things in advance.

SPENCER: I see. So it's a bit like weather forecasting. Weather forecasting is better today than it ever was. But still, if you want to know is it going to rain 27 days from now, you probably don't know.

MATT: Exactly. And I make an analogy in the book that others have made between traffic and weather, you know, they're both on the radio every 10 minutes, in some places, because they're both things people really want to know. One is a mostly non-social situation. And one is a mostly social situation. And actually, we're getting better at predicting both. And we're getting better at giving you real-time data on both. But that doesn't mean that we're able to achieve what people want from that information. So a lot of times people actually want the weather to change, or they want the traffic patterns to change and do so easily through a policy intervention that's easily adopted and that kind of thing. We still can't do it. But that's not the problem of understanding the world. That's a problem of trying to implement things that change people's experiences.

SPENCER: Are you familiar with Phil Tetlock's book Expert Political Judgment that he published in 2005?

MATT: I am very familiar, and it's cited a lot in the book.

SPENCER: Okay, yeah. So in that book, he basically finds that experts are really bad at forecasting the future around political issues, as I'm sure you're aware. Now, I don't know to what extent he's talking about the same sort of experts that you're talking about, so I'm curious to hear your thoughts on that. Obviously, that book is a little old, being 15 years out now.

MATT: Well, obviously, the activities that he's involved in have kept up. So we have very recent evidence on how good experts are at predicting social and political events. And I would say the update isn't so much to the headline, that it's very hard to predict world events, and that people who are saturated in one area of expertise aren't going to be the best people to make those predictions compared to people who have a broad knowledge of general social trends. I think those things are still true in his most recent evidence. But I would say that the current punch line isn't that world events are unpredictable. It's that in order to make good predictions, you have to teach people to adopt different thinking habits, and you have to have them work in groups. And conveniently, I think social science is undergoing both of those processes over time. We're seeing more team science, we're seeing more interactions worldwide in interpreting social events. And we're seeing the adoption of more strategies to try to protect against these basic human biases and the more particular biases that researchers engage in when they start research.

SPENCER: So would you say that if we looked at academic papers in your field that made predictions about the future 20 years ago, and we compared them to ones being published today, the ones today would actually be more accurate? Like they would be right a higher percentage of the time, or have better Brier scores?

MATT: That would probably be true, but it's probably a low bar because we used to sort of make broader pronouncements than we do today. And I think that's true across a lot of these fields across economics and sociology and psychology.

SPENCER: So humbleness is part of the learning here.

MATT: Yeah, it is. It's to narrow the scope conditions of your claims, to try not to reduce a sort of world-historical process to a few variables. Those are newer trends in social science that are productive.
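(For readers unfamiliar with the Brier scores Spencer mentioned above: a Brier score measures the accuracy of probability forecasts as the mean squared error between the forecast probabilities and the 0/1 outcomes, so lower is better. A minimal sketch in Python; the numbers are made up purely for illustration.)

    # Brier score: mean squared error between probability forecasts
    # and what actually happened (0 = perfect; lower is better).
    import numpy as np

    forecasts = np.array([0.9, 0.7, 0.4, 0.8])  # predicted P(event)
    outcomes = np.array([1, 1, 0, 0])           # 1 = event occurred
    print(np.mean((forecasts - outcomes) ** 2))  # 0.225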

SPENCER: And then what about inferring causality? Because in addition to prediction, which is obviously of interest to people, the topic that's probably even more of interest to people is understanding why something happened, or what causes what.

MATT: It is. But I'll push back a little bit and say that a lot of the important trends are happening in just descriptive data, on a lot of things that people are interested in, like polarization in my field and economic inequality in economics. Big research fields have developed basically on the back of pretty clear descriptive trends that we were able to identify, and then a lot of work goes into analyzing the determinants later, after that descriptive improvement has continued. So I think that's a success, regardless of whether we get to the full causal story. And one thing I would say about the full causal story is that we're recognizing that causes change over time, as well as outcomes. They are different in different parts of the world. Sometimes they're different across different people. So I think the search for sort of one or two causes for a broad phenomenon is sometimes misplaced. But that said, there's a causal inference revolution that started in economics and is being adopted throughout political science. And it involves not just the increase in experimentation, which is very widespread and important throughout the social sciences, but also the adoption of statistical techniques that allow for better causal inference in particular situations, and the search for research designs and data that might take advantage of those techniques. So we're on the right path, whether it's looking at those descriptive trends and getting better at measurement, or trying to figure out causal chains, as long as we restrict the scope of where we're able to say that we know what causes what.

SPENCER: So I think you're saying that, first of all, prediction often comes before causal understanding. So you first want to build a predictive model, and then you can start talking about, okay, what is causing these changes, and so on. Is that correct?

MATT: Well, it's not only prediction; I would also say that description is important. So we do often want to know: How widespread is something? How long has it lasted? Is it going up or down? And those are often the first steps in a research design. Even if we can't say, for example, that we know that economic inequality is going to increase in the future, just knowing that it has over a certain time period gets us a lot of interesting research questions that we then go and narrow down.

SPENCER: So maybe measurement description comes first, which might mean getting better data or figuring out better tools for measuring. And then you can get into prediction, saying, Okay, we've measured this thing, and now we can try to predict where it's going maybe in the short term, and then you can try to say, Okay, now let's dig into the causality to understand what's causing what, why is it changing, and so on.

MATT: I think that's true. But also, even if we start by thinking about causality, that might move us to better data collection in the future. So if we're interested in, say, the effects of the nationalization of media, we might have to go to a particular instance, when, say, the networks advanced in the 19th century, versus the death of local newspapers or the nationalization of local television stations. So we might have to go to very particular situations where we have some causal leverage, and then get better measurement of those things in those places where we might have causal leverage.

SPENCER: I see. And you're also pointing to these kinds of changes in methods used to infer causality. So for those that are not familiar with this topic: you have the classic experiment, where you're going to randomize some variable. And because it's randomized, that allows you to say not just that a change in the variable is associated with a change in something else, but that changing the variable causes something else. And then you also mentioned these kinds of other approaches; I imagine you're talking about things like instrumental variables techniques and regression discontinuity and that kind of thing.

MATT: Oh, absolutely. And just more widely thinking of places where something occurs in the world that looks more like an experimental situation and taking advantage of those.

SPENCER: Right. So like, for example, if earthquakes happen semi-randomly, you could treat an earthquake as sort of a semi-random shock. And then, if you wanted to study the effects of something that an earthquake causes, you could actually do that without being able to randomize that thing yourself because sort of the world is doing it for you.

MATT: Exactly. So there's an interesting study on the San Francisco earthquake in the early 20th century that tries to look at sort of what happens to local property markets when you see that kind of shock. You can still see the effects today in terms of what's built where, based on the initial earthquake, and then they went to see if the same would be true for a late 20th-century earthquake as well. So yes, you can use those kinds of things, and some things that don't sound like natural experiments the way earthquakes do might still enable that. You mentioned regression discontinuity: obviously, if there's a test that allows someone to get into a college, and we compare the people right below and right above that cut-off, then we might have a better estimate than if we just compared all the people who went and did not go.

SPENCER: Right, because there might be many, many factors that determine whether someone went to that college or not; people differ in all kinds of ways, like their history, their background, their wealth, and so on. But if you look at people who fall just below the testing cutoff and those just above it, because their scores are so close together, they are very unlikely to differ in any particular way other than just sort of the randomness of whether they were just above or just below the threshold. Therefore, the people just below the threshold are essentially like a control group for the people just above, except they didn't go to that school. And so we can assess the effects of going to that school. Is that the idea?

MATT: We can, but there are still trade-offs. Just like in an experiment, we have to accept the trade-off that we might not be studying exactly the thing that's of theoretical interest. Here, we might not be interested in just the effects of attending that one college in that one time period. And so we do still have to investigate how widely these patterns are confirmed. Nonetheless, there's a huge gain from being able to make that causal inference even in that specific scenario.
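(A minimal sketch of the regression-discontinuity idea discussed above, in Python. The data are simulated, not from any real study; the cutoff of 50, the variable names, and the true effect of 2.0 are assumptions made purely for illustration. The coefficient on the admission indicator estimates the jump in the outcome at the cutoff.)

    # Regression discontinuity on simulated data: applicants scoring at or
    # above a cutoff of 50 are "admitted"; we estimate the jump in a later
    # outcome at the cutoff, using only observations near the cutoff.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    score = rng.uniform(40, 60, 2000)            # running variable (test score)
    admitted = (score >= 50).astype(float)       # treatment indicator
    outcome = 30 + 0.3 * score + 2.0 * admitted + rng.normal(0, 3, 2000)

    near = np.abs(score - 50) < 5                # bandwidth around the cutoff
    centered = score - 50
    X = sm.add_constant(np.column_stack([admitted, centered, admitted * centered]))
    fit = sm.OLS(outcome[near], X[near]).fit()
    print(fit.params[1])                         # estimated jump, close to 2.0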

SPENCER: So let's go meta on this for a second. Suppose you pick up a recently published paper in, let's say, one of the top 10 journals in political science or economics or, you know, social psychology. You read the paper, and let's say they have some studies, and they make a claim based on those studies that, you know, X causes Y. It's not a claim you'd already heard; it's not a claim you already believed. How much do you update your opinion on that? How convinced are you by reading one paper that makes that claim?

MATT: First of all, I think that one paper would be better than its equivalents 10 or 15 years ago, by a pretty considerable margin. But I don't know that you should update your beliefs as if they were already confirmed by a long series of papers of various types. I still think science is a social enterprise that comes to reflect reality as people engage with evidence that was collected by other people. But one of the benefits is that although we're not great as humans at understanding our own biases and how they interact with our research process, we're very good at identifying other people's. And so that means that through this social process, we can say, "Okay, well, you saw this claim in this paper, you believe it, but I start from a different perspective." And so I might identify different problems with the claims made in the paper. And if we both talk about it together, then, combined with what the authors have come up with, we might more closely approximate the truth.

SPENCER: Something really depressing that I've observed: if you look at these scientific self-improvement type books based on social science that were published, let's say, 10 or 15 years ago, and you read them today, you find them just littered with references to studies that didn't replicate; a whole bunch of the classic ones cited in a ton of these books are now in dispute. So do you think that was a kind of problem of that era and that we're moving past it now? I was recently talking to an author who published a book, and she was saying that she was really struggling to know what to do, because she felt like if she cited a whole bunch of social science relevant to her book, she feared that in 10 years a whole bunch of it would turn out not to replicate. But if she didn't cite it, she worried that people would call her book unscientific. And she felt like she was stuck between a rock and a hard place.

MATT: It's certainly true that we're discovering problems in more past studies. And part of that is through the replication revolution: more studies are being tried again, in both the same and different circumstances. I would just point out that it is still progress, right? Learning that some things we thought were true before turn out not to be true, or not to be as true as they initially appeared when published, is still a learning process, and one that we are going through regularly. That said, the problem is not solved. It's part of the human condition that we want to make a claim, cite the strongest evidence available, and move on to the next claim without full engagement. And it is a problem that is greater in the journey from scholarship to popularization. So the more that you're trying to make a bigger argument and talk to a broader set of audiences, the more likely you are to sort of skip some steps in the verification or comparison of evidence that you might have done if you were in a particular scholarly area. So we're stuck with that. That's normal. But I think the processes are improving even there, even on the popularization side; there are more people who are willing to push back. I talk, for example, about the debate surrounding Steven Pinker's books, especially the evidence that violence is declining over time. And I really think it's great to have this public debate about these kinds of big claims. A lot of it has held up, but of course there's a lot of pushback too, and that helps us refine the truth of the claims and sort of narrow them, to say, well, interstate wars have been in decline since 1945, and that is a big part of this global trend. Some of the evidence gets worse as we go back in time. That doesn't mean there's no evidence of widespread human killing across societies; there is, but there are disputes in certain quarters of it. So I think that process of popularization, people responding to the widest popular claim with different kinds of evidence, and the scientific community as a whole reevaluating those broad claims and their specific application, has all benefited us.

SPENCER: So I kind of want to dig into where we actually disagree, because I feel like there may be something fundamental we disagree on, but I'm not exactly sure what it is. I tried to calculate the median sample sizes used in studies at different points in time, and I found some papers that try to make estimates of this. If you look back at 1977, the number I found was about 17 study participants per group.

MATT: Is this specific to psychology?

SPENCER: Yeah. So this is more in psychology, which is the area that I know best. Thanks for clarifying that. Go to 1995, almost 20 years later, and it was up to about 19 study participants per group; by 2006, it was up to about 21 participants per group. Now, I do think that there's been some improvement since 2006; I do think sample sizes are a bit bigger. But you know, 21 per group is just appallingly low. And if you do the power calculation on that, you discover that there's just a huge probability that they wouldn't even find the effect they're looking for if the effect were real. I think those power calculations typically find less than 60% statistical power. So, in other words, if your effect is real, there's over a 40% chance you don't even find it. Now, today, I do think it's a bit better. But I still think that sample sizes are just way too small in a lot of research. And I'm wondering whether you disagree with that, or whether you just think, okay, yes, they're still too small now, but at least they're better than they were in 2006?
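(A rough version of the power calculation Spencer describes, in Python. The effect size d = 0.5, a "medium" effect by Cohen's convention, is an assumption, since no specific effect size is given in the conversation.)

    # Statistical power of a two-sample t-test at the sample sizes Spencer
    # cites, assuming a true effect of d = 0.5 and alpha = 0.05 (two-sided).
    from statsmodels.stats.power import TTestIndPower

    for n in (17, 19, 21):
        power = TTestIndPower().power(effect_size=0.5, nobs1=n, alpha=0.05, ratio=1.0)
        print(f"n = {n} per group -> power = {power:.2f}")
    # All of these come out well under the conventional 0.8 target: with 21
    # participants per group, a real medium-sized effect is detected only
    # about a third of the time.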

MATT: Well, I think you would actually see fairly dramatic improvements in some areas of psychology since then. And I think psychology was an outlier in its reliance on very small-scale studies to make very broad claims, and it has also done some of the most to try to improve and address that situation by replicating studies across contexts with larger samples. But some other social sciences have always had somewhat larger samples and are also discovering that the power they think is there might not really be there, given the level of change that we see in the real world, or that a few cases might really be driving results, even if it looks like there's a very large sample size. So I think psychology is improving on that dimension. But we're also discovering that it's sort of part of a broader set of issues, and some improvements we still have to evaluate. So for example, it is much easier to get a large internet survey sample than it used to be. And so people will do a lot of experiments on a very large sample size by the standards that you just mentioned. But it's a different population. It's not necessarily a worse one than, say, the undergraduates willing to get extra credit who populated some of the studies of the past, but it is one with potentially different biases that we might want to take into consideration in evaluating those studies. So sample sizes are going up, but I would say that's a fairly small part of solving the traditional problems of social science.

SPENCER: If I really think about why I find it so disturbing that the sample sizes are so small, I think it suggests to me that there's some kind of deeper structural issue. Because you're scientists; you want to figure out what's true about the world. Why would you ever use such a small sample size that even if your hypothesis is correct, you have a 40% chance or higher of not being able to confirm it? To me, it's just bizarre that this persisted. And I found this paper with a great title, "The Statistical Power of Psychological Research: What Have We Gained in 20 Years?", which is basically about how for 20 years there was no improvement in statistical power. And this paper was written 30 years ago. So 30 years ago, there was a paper saying that for 20 years there had been no gains in statistical power, even though it had been repeatedly pointed out, again and again, that statistical power was way too low. So to me, that suggests something is really weirdly broken. And maybe that thing is now getting better, whatever it was, but I just find that really disturbing. So I'm curious if you have a comment on that. How did we get a system where, for decades and decades and decades, we were using sample sizes so low that we weren't even finding our real hypotheses?

MATT: Well, part of it is that people weren't aware of the meta-science research findings in the way that they are now, and in the way that there's now a community of people trying to bring findings like that to broad audiences and show that they really are critical to the success of scientific fields. This isn't completely unique to the social sciences, though; they find the same kinds of things in other areas of the sciences as well. So I think there's collective learning going on about the importance of these basic issues, and it is being reflected in current scholarship.

SPENCER: Today, when I look at a paper in a top journal, I assume there's about a 40% chance it won't replicate. In other words, if someone faithfully tries to do just what that study says, they will not get the same result as the original study. Do you think that I'm being overly pessimistic?

MATT: I guess I would say it depends on what journal you're reading.

SPENCER: It's a top journal. So say, you know, top 10 journals in a given field?

MATT: Yeah, I think it depends on the field. I think in political science that's not true. But it's not true because some of the kinds of analyses that we're doing are on publicly available datasets with replication code that gets re-run before the publication makes it into print. It's the same in economics, where there are proliferating appendices doing it 100 different ways to show that it doesn't matter that much. So I would be pretty confident that you're going to get the same results. But to me, that's not much of a test, right? It's just: I downloaded the same data, ran the same code, and it turned out the same way. I wouldn't necessarily call that success, compared to, say, I ran a psychology experiment with 100 people and got the same results that you did running it on a different 100 people.

SPENCER: Right, I mean more the cases where you have to go collect the data again for yourself, from scratch. Yeah.

MATT: It depends on what the data collection process looks like at the time. But I would say that in all of these fields, your estimate of the percentage of studies likely to replicate should be increasing over time, though it might have been pretty low in some fields to start. The other thing is that I am sort of less confident than some of the academic reformers that this kind of study, where we get the same result on a different population of people, is as central to understanding the social world as some people seem to think. I think there are often cases where you should expect results to be different, with a slightly different population, with a slightly different time period, with slightly different changes to the stimulus or interpretation of the stimulus. So I'm more open to us learning from the failed replications, and not always assuming that it was a failure in the initial study.

SPENCER: I agree with you that there are certainly times when you expect that to happen, right? Like if you run a study on people in the 1980s and you run it again on people today, there are all kinds of reasons why, even if you try to be totally faithful, you might get different results. Culture has changed; you can't get exactly the same population. Or if you try to replicate cross-culturally, certainly, people might interpret the meaning of things differently, and so on. So, you know, failure to replicate doesn't mean that there was necessarily incompetence or anything like that. But if someone does a study today, you know, in 2021, on a particular population, and then someone else who's competent tries to replicate it on a similar population using the same methodology, and they don't get the same results, it doesn't necessarily mean the original team screwed up or was incompetent. But it certainly makes me highly suspicious that it's going to generalize, right? Because whatever happened, either it was a false positive, just a statistical anomaly, or it was p-hacked or whatever, or there's some incredibly subtle thing that can actually change the results of that study, which we most likely don't understand. You know, in some cases maybe we do, but most often I think we don't. And so it still shows me that the result is brittle and very hard to generalize from.

MATT: And I think that the second level of skepticism should have been high all along. And it's good that it's increasing. So I think that part is correct. But it's a truth about the world. It's not a problem of social science.

SPENCER: Another kind of element of this is that you say that social science is more relevant today than it used to be. What do you mean by relevant? Can you unpack that for us?

MATT: Well, sometimes there is a hypothesized trade-off between rigor and relevance. That is, there are people who say, well, you can do all of this fancy data work, but it's not really answering these broad questions that society has tasked us with answering, or that we are really interested in answering, or that theory guides us to answer. And while there is some reason to believe that there might be some general trade-off between the questions that we want to ask and the questions that are easy to answer, there is an effort to do both in today's social science. There's no shortage of studies on any given topic in the public domain. So if we pick a topic in today's newspaper and ask what research evidence speaks to this topic, we will find more than we ever have. And we will find intermediaries who are seeking to summarize the evidence and apply it to that contemporary topic. So the path moves both ways, between understanding and conducting the research on an issue and applying it to a broad real-world question, and both of those paths are working better than ever.

SPENCER: What do you think is driving increasing relevance?

MATT: I think there is something of a flattening of attempts to understand the world across, say, the media, think tanks, the academic world, and the political world. And so there's more translation and more interaction across those fields, and it's easier for scholars to recognize and find evidence across fields than it has been in the past. But there is also a change in attitudes in the sciences. There used to be a posture in social science that it was okay that we were in our own journals, only reading the journals in our field or even our narrow subfield, and only talking to a small number of people, and that knowledge would accumulate that way; we didn't really need to go through the translation until we were finished with the project. And I think that attitude is long gone across the social sciences. There's much more effort to respond to real-world public conversations and to bring the best evidence that we can to inform those discussions.

SPENCER: Yeah, it's interesting, because I feel like I often see papers that are about important real-world topics, or rather, that kind of touch on them tangentially, although I think it's much rarer to see ones that feel actionable, like they're actually gonna help us solve that problem. Do you think that the actionability has changed over time? Do you think there's more emphasis on making the results useful? Or do you think that hasn't changed as much?

MATT: Well, I think it's always been hard to act on research, even if we want to. Let's say we have very good evidence that a particular educational intervention worked in a particular case; our ability to actually know that we could implement that on a broad scale has always been low. We're increasingly recognizing that, but it remains difficult. And there is a bias, I would say, that does come into social science from the effort to be actionable, right? If I want to develop an educational intervention that is going to increase test scores in elementary school, the fact that I'm driven to have that applicability is going to bias the kind of research that I am able to conduct and how I'll interpret it. But again, that's always been true. We're just recognizing that those biases exist and trying to incorporate other types of evidence and other types of people in the decision-making chain surrounding how to apply that evidence. So I think it is hard, but it is getting better.

SPENCER: You also mention that you think social science has become more self-reflective. What do you mean by that?

MATT: Well, I do incorporate in the book large-scale surveys of political science, economics, sociology, psychology, and anthropology. I do have evidence that social scientists are thinking about all of these issues that we've discussed, but also that they perceive declining disagreements in the field: not small disagreements, but the sort of large-scale disagreements that say, you know, we should only do qualitative or only quantitative work, or all work should aim toward causal inference, or no work should. Those kinds of divisions are in decline throughout the social sciences. I also had a lot of open-ended questions, so I was able to see the texture of what people were thinking about. And all these trends that we've been discussing are widely recognized and discussed across these fields, though not everyone agrees. So, for example, one of the more controversial elements is pre-registration. That is the idea that before I conduct a study, I submit a plan that says what kind of analyses, and sometimes data collection, I'm going to undertake in that study. And then I at least present what I said I was going to present, and I can add on things in addition to that if I say that I've added them on. So that's a very controversial intervention in social science, even though it's a rising one. But I would say that the people on both sides of that debate are actually a lot more reflective than their opponents sometimes give them credit for. That is, the people who say we need to move toward pre-registration are very reflective about the problems that come up in that process in their own research, about how they did not anticipate certain decisions and had to make adjustments along the way, and about how difficult it is for younger researchers to put themselves in these boxes. And similarly, the people on the other side of that debate are also quite reflective about what is gained through that process, but also point out that by saying we can't do that for most research, we are saying that most research is exploratory and not designed for confirmation of theory. So that's just one example of where I think these divides are growing more narrow, and each side is more reflective than we give credit for.

[promo]

SPENCER: So if we step back and think about this topic of social science making progress: to make progress, it has to accumulate, right? Essentially, we have to learn things, then we have to agree on at least some of those things, then we have to build on what we learned. And I've talked to quite a number of people who basically think that social science is doomed; basically, we just can't build on it, it's not a cumulative science, we haven't really made substantial progress. It's like trying to build on a foundation of sand. I know that you're much more optimistic than that: you think it's very difficult to accumulate knowledge, but that we are able to do it, and that science is probably the best way to do it. So yeah, I'd love to hear your thoughts on that.

MATT: Well, we've just gone through an interesting example in the COVID pandemic, where we have an issue that is new to science generally, that we had some basic understanding of from previous experience. And there were certainly people who said at the beginning, only let the virologists and the public health people and the epidemiologists talk in these discussions, because they're the experts. And there were a lot of social scientists who said, well, actually, there are a lot of social determinants of health, there are a lot of social factors that are going to impact the effects of the policies that might be recommended by the medical and hard science experts, there are going to be political factors that affect our ability to implement those policies across different localities, and there are factors in human behavior that we need to understand that are going to impact our success. And a year and a half later, I would say that the social factors have been quite important, and that we were able to understand them through the tools of social science. So in my own world, partisanship was a major factor both in the implementation of policies and also in the actual behavior of human beings. We are seeing it still in vaccine uptake. The kinds of predictions that were made by the hard sciences were also off in a lot of cases, not, in my opinion, because they failed miserably, but because it was a very difficult situation to understand. The same is true of some social predictions. But I don't think there was any sort of hard line differentiating what we can learn and what we can't about an extremely important social problem that we came into without a whole lot of previous information.

SPENCER: Do you think social scientists helped significantly with the pandemic? Like, using their kinds of theories and data, were they able to make things go better than they would have?

MATT: I would say that, yes, they did, but not in some sort of grand success. I mean, we should say, and this should defend the epidemiologists as well, that just because research findings are made and brought to the public's and policymakers' attention does not mean that they are implemented. And that is also true of social science recommendations. But yes, I think, for example, the early learning about what kinds of lockdown policies were most effective versus which had the most economic implications involved economics as much as it did public health. I think some of the learning about what kinds of things were possible in a decentralized political system that had state and local action versus others was influential in government decision making. And even in vaccine outreach, some of the initial social science that said that minority communities would be hesitant about the vaccine was effective in redesigning policies to target them. But then some people forgot that partisanship and urban-rural differences would also be huge factors in vaccine uptake. And so there are now very specific efforts to address those remaining holdouts.

SPENCER: That's really interesting. And I should say, I wasn't reading a lot of academic papers from the social sciences on COVID specifically, so it might be that we're pointing at different things. But at least in the public press, where you were seeing social scientists comment on COVID, I actually felt that a lot of the contributions were anti-helpful. I'm not gonna say all of them, but early on in the pandemic, I literally saw articles from social scientists saying, here are all the cognitive biases explaining why everyone's so afraid of COVID when they should really be more afraid of the flu. I mean, that was the narrative I was seeing early on. Then I saw stuff in the public press, again quoting social scientists, saying, "Oh, yeah, you know, if you wear a mask, you're gonna become complacent, and you're likely to end up getting sicker." Basically sort of anti-masking arguments. And, of course, that attitude eventually flipped the other way around. In the UK, I saw social scientists saying, oh, we should do a cocooning strategy, where young people just go about their business and old people get cocooned. And then they totally flipped that after a little while. And so, at least in the public domain where social scientists were commenting, I actually felt it was muddying the conversation, confusing people, and, if anything, on average pointing in the wrong direction. I'm just curious about your reaction to that.

MATT: Well, of course, there were lots of people saying things that turned out not to be helpful early in the pandemic, social scientists among them. We might have to design a study to try to figure out just what kinds of pronouncements were made and how many turned out to be helpful and not. But I would distinguish a little bit between, say, social scientists being covered in the media making an application of very general theories of, say, psychology to the current pandemic, versus social scientists who had done initial research with things like survey data or geographic data and were trying to learn lessons from that, which they then applied to public policy decisions. I think you'd find a lot more success in the latter.

SPENCER: Yeah, I can believe that. It's interesting. Honestly, if anything, the pandemic made me think significantly worse of science as a whole: it made me think worse of epidemiology, worse of bioethics, and worse of social science, because I just felt like a lot of the contributions were less helpful than I anticipated they would be.

MATT: Well, one thing I would say, and this won't necessarily be helpful from a societal point of view for immediately responding to the pandemic, is that social science accumulation, and scientific accumulation generally, are very slow processes. And we asked them to be very fast in this situation, and some others, and sometimes that's for a good reason. But I have a lot more confidence in what we will learn about the pandemic, say, over the next five or so years than I do in what will be immediately implementable and successful.

SPENCER: Right. So hopefully, for the next pandemic, we'll be more prepared, which would be a really good thing. Now, you also talk about diverse voices in social science and how that can be a form of progress. Do you want to make some comments about that?

MATT: Yes, so the social sciences are getting more diverse along racial, gender, and international lines; that is, the social sciences are becoming more globalized, and within societies they are incorporating more women and racial minorities, some fields slower than others. All of those trends, in my view, are visible in the research: in the topics covered, in the diversity of interpretations offered for the evidence that we have and the questions that are asked, and in uncovering conclusions from past research that turned out not to be the case because of the biases of the research community. But we have also become less diverse in one important way, and we never were particularly diverse in this way, which is in politics. The social sciences today are overwhelmingly composed of people on the left side of the political spectrum, both in partisanship and in ideology. That is true across countries, and to the extent that it is changing, it is moving in the direction of less political diversity. And I am one who thinks that this does matter. And it matters largely for the same reasons that the other things I was talking about mattered. Because we have historically been overwhelmingly focused on the United States, and less so on Europe, social science has assumed that a lot of those findings might apply elsewhere, and they don't necessarily do so. And social science trying to analyze, say, the beliefs or behaviors of conservatives from the perspective of a community that is almost uniformly liberal is going to be a hard undertaking. The intervention that I try to make is to say that you often see these two concerns juxtaposed, where it's conservatives who are saying that the move toward identity concerns in the social sciences is a sign of a big problem. I actually think that we can learn from the successes of identity movements within the social sciences, and expect that, therefore, it is going to be important that conservatives not only continue to be a part of science, but also that they comment on science and engage in the process, and that we have the same back and forth between the people who are discussed in scientific articles as subjects and the types of people doing the research. So this is a place where I see a lack of success. I guess I would say that we have a long way to go, but I still see the trends as positive. I think, actually, more researchers are recognizing this bias, especially in the areas where you would expect it to be most influential, areas like the differences between liberals and conservatives, where our evidence is a lot better than it used to be.

SPENCER: Do you know what direction the trend is going? Is social science becoming more liberal or less liberal over time?

MATT: To the extent we have evidence, it is becoming more liberal over time.

SPENCER: I see. But you're still optimistic that we're going to be reducing this bias?

MATT: I am less optimistic in this area than in these other areas. But I am optimistic that the ways that it has worked, that it has been successful in other areas can work in this area, as well. And I do think the recognition, to the extent that it is increasing, is making a difference in the kinds of studies that we conduct and their interpretation.

SPENCER: So what do you feel like is lost by having a lack of diversity in political perspective?

MATT: I think we ask different questions than we otherwise would, I think we tend to start from more negative views of our political opponents than we otherwise would, and I think that our interpretations of the evidence are often the least generous to people on the other political side. So just as one example, we have a scale in political science called Racial Resentment, and even the name of the scale sort of connotes that unless you're at zero on Racial Resentment, then, you know, you have engaged in racism. It's a very negatively valenced kind of term, even though the questions in the scale have a range of perspectives, and there are lots of people who might be in the middle of the scale whom we might not necessarily recognize as resentful in any common-sense way. So I think those kinds of things are common within social science, but way more common in the areas where you would expect it, where it most directly impinges on conservative values and beliefs. One quick thing I want to say about this, though, is that this is not a unique problem; it's very common in social science. We have, of course, the problem that you started with at the very beginning, which you said was an endemic part of social science, and which I agree with: we are studying ourselves. We're not having any non-human entities study the human condition. So it is very normal for us to be in this kind of situation, where we can't necessarily just rely on our diversity to move us forward. We have to rely on reflection, asking: how is the fact that we are liberals affecting our investigations? And that part of the process is going well.

SPENCER: One way I've seen this affect the wording of questions is that it sometimes feels like certain assumptions about the way the world is get packed into the questions. For example, I've seen scales trying to measure racism that assume that if someone's against immigration, then there are kind of racist underpinnings to that. And of course it could be true; one reason to be against immigration is that someone could be racist. But you can imagine someone might be opposed to immigration for other reasons that are non-racist. And these things being conflated can kind of lead to misinterpretation. So I definitely agree with what you're saying, that on these more political issues, having diversity of thought could be helpful.

MATT: Yeah, I think in that particular example, which I happen to be pretty familiar with, the difference between the sort of public discussion and the actual academic research around immigration policy attitudes is pretty vast. If you looked at that academic discussion, you would actually find pretty nuanced visions of how values, general political perspectives, views of immigrants, racial attitudes, and differences across the country all come together in immigration attitudes. And I think things are actually progressing, even there, in that very politicized issue. But just one analogy I want to make real quick is that this, again, is not that unique of an issue, right? We are trying to understand, say, what happened in the 70s versus today without the people who were living in the 70s doing the investigation, or, more difficult, trying to compare things in the 15th or 16th century with today with only the perspective of people who are living today. So it is a fairly common problem in the social sciences, one we can't necessarily solve just with more diversity of researchers.

SPENCER: Yeah, that's a good point.

[promo]

SPENCER: So another topic that I think is quite interesting is the extent to which social science should inform policy, but also the extent to which it should actually impact our day-to-day decision-making. What are your thoughts on that?

MATT: This is kind of related to the point you made about the books with the psychology studies that didn't replicate. There's a long-standing desire, natural I think to human life, to want to engage in self-improvement. And that has historically been the home of some of the wackiest adaptations of both social and natural sciences. And it is true that get-rich-quick schemes don't usually work, that health fads don't usually work, that basic changes that can be implemented in three or four steps by everyone are unlikely to produce major changes in social or economic outcomes. That has always been true. I don't want to argue that public understanding of that problem is increasing. I just want to argue that the actual research that undergirds those kinds of recommendations is improving, and that there is more pushback between the popularization of those research results and the researchers across the scholarly community, which is likely, hopefully, in the long run, to actually make it to the people who are trying to engage in their own self-improvement.

SPENCER: Do you think that we should be able to learn how to improve our own lives from social science?

MATT: I think we should expect it to be difficult for individuals to interpret research evidence from a research community and apply it to their own lives. But I absolutely think it is possible for people to improve their lives. And even in these areas, like we've been talking about, cognitive behavioral therapy is a good example, where the techniques come from the tenets of psychology, from repeated research results that were translated into an applied form that has been there from the beginning, that is trying to make a difference in people's lives, and it has actually been quite successful.

SPENCER: Yeah, I think cognitive therapy is a really interesting example, because it does seem to have a lot of really practical, useful techniques that benefit a lot of people. I think it arose originally through a combination of the clinical side, people working with patients, and the research side kind of coming together.

MATT: And I think that's always true. Honestly, even though you sometimes can't think of the clinical equivalent in some cases, I think it's actually quite true even in a field like economics, where a lot of the innovations are not going to come just from the economics profession itself, but from the engagement of the economics profession with actual economic actors.

SPENCER: And now let's switch to kind of our final topic, which is the university system. Some people tend to think that there's sort of a big crisis going on with US universities, or with science as a whole. So I'm wondering, what are your thoughts on that?

MATT: So, I think the perception of universities in crisis is one where the public and the scholarly community actually have some sort of consensus. That is, the scholars are also disappointed that their graduate students can't get jobs, that the university bureaucracy has somehow undermined the scientific enterprise, that there are some negatives from the engagement of the university in the commercial world or in the governing world. So you actually see a lot of similar complaints inside and outside social science related to these issues. But the evidence that I'm able to accumulate basically says that each of these social science disciplines is quite large and quite stable, and that a lot of the basic trends that people are worried about in the social sciences arise from some pretty normal dynamics of universities. For example, the big one is the move toward applied fields. It used to be the case that a lot of people in the core social sciences like economics and sociology would get jobs in business schools or education schools or public health schools, and now all of those places produce their own Ph.D.s as well. And that means the market for social science has actually grown over time, not been limited. But it does also mean that for the internal labor markets of those fields, you're going to see some problems, even though the number of economists and sociologists is fairly constant or increasing over time. So I don't want to defend the university in total. I'm just trying to argue that the kinds of problems people think come from the university, the fact that social science has to take place in the university, or takes place largely in the university, are not inhibiting the development of social science research.

SPENCER: Some people argue that there are too many Ph.D. students being produced, and this leads to an overabundance of people who want to be in academia when only a small percentage of them can stay. So people end up spending, you know, five or six or seven years doing their Ph.D., only to essentially be booted from the field, or to end up in a series of really crappy adjunct roles, or maybe postdocs in the middle of nowhere, in a place they really don't want to be. So I'm wondering what you think about that critique.

MATT: So the basic trends you identified are correct, but I'm trying to identify how they fit into the bigger picture. Certainly, educational attainment is increasing in the United States across all levels, including at the Ph.D. level. And that means that people are often placed in non-academic positions after going through programs which are designed mainly to produce academics. And so it does produce that kind of situation that you were just talking about. I just don't see it as a negative. There is still the same number of people engaged in social science within universities, and in addition to that, there are more people with training in the social sciences who have careers in industry or government. Those trends are likely, perhaps, to upset prospective or current graduate students, but they don't necessarily mean that we should expect negative trends for the production of social science research or its use in the real world.

SPENCER: Well, if anything, it makes it more competitive, so you might even have more competition in terms of finding the best people to continue in the field. But from the point of view of Ph.D. students, or people considering going into it, it does seem increasingly grim. I don't know what your experience is talking to Ph.D. students, but I've often found them to be really depressed and to feel like they're in a bad situation. I don't have any actual data on that; that's just my anecdotal experience. I'm often surprised by how dim a view many of them have of the situation they're in.

MATT: There is data showing an increasing number of people, at least, stating that they have mental health conditions, and there is also some evidence of increased use of mental health services among graduate students. So both of those trends that you've noticed are real, and I do not mean to minimize the effect on the humans involved in the situation. It is very disappointing to go into a field and have training primarily designed to get you ready for a very specific type of position in that field when there aren't many slots available. So all of that is true. It just is not undermining the success or accumulation of social science research. If anything, it is expanding the number of people with training in the social sciences who are able to interpret social science evidence and apply it in the real world.

SPENCER: I feel like a major difference in the way you and I think about this is that even when you see negatives, you say, "Oh, well, they're improving," and you see sort of silver linings and good things coming out of it. Whereas when I see the negatives, I say, "Well, these are not as good as they could be," and it kind of leaves me in a more sour mood toward some of these things. So I'm wondering how much of our disagreement just comes down to personality or something like that.

MATT: I do think there are differences in optimism and pessimism that are due to nature rather than nurture or the interpretation of evidence, so I'm certainly willing to believe that. But I also think the outcome I have in mind is maybe different. Obviously, if you want to look at all of the costs of the current system in universities, you might come to a very different view than my kind of narrower lens, which is just: are these trends affecting the production and accumulation of social science knowledge?

SPENCER: So if you were to take, for a moment, the perspective of people who think the social sciences are in a really bad state (and I know many people like that, who have basically started writing off social science, saying they don't trust papers anymore and don't think the science is cumulative), what do you think the steel man argument for their view is? I know you don't agree with them. I know you think things are getting better, that we're probably in the best place we've ever been. But what do you think the strongest case is for their view that things are really bad?

MATT: Well, although I disagree, I do want to say that I still value their perspective. In fact, a large part of the book is about how all of these trends in social science were responses to the major critiques. Everything from the lack of diversity in the social sciences to complaints about the rigor of causal inference, all of those things came from critics, not optimists like me. And so advancing those perspectives is actually quite important to the development of social science. I think the strongest version of their view would just be that these problems in the social science enterprise are, I guess, even more innate than I've suggested, and that means that although we might be able to adopt a technique here and there, we're still going to be faced with the same basic dynamics, which make it near impossible to identify generalities about the social world that can be put to any reasonable use.

SPENCER: Yeah, and I think a core critique that people have, and maybe should have, is about incentives. Sure, people can take steps to do things like try to use better practices and agree on what it means to do good science and so on. But if, fundamentally, a large part of the incentive is driven toward just publishing as many papers in top journals as you can, otherwise you get squeezed out of the field, a lot of that stuff will end up being cosmetic, because at the end of the day you have to publish those papers somehow. And I guess one way I think about it is, it's very hard to come up with something important and novel and true, right? That's a really tall order. But if you need seven publications in top journals to get a job in the field, and almost nobody except, like, the greatest super-geniuses can come up with seven important, novel, true ideas, then something's got to give. There's going to be pressure toward doing shoddy research, and these attempts at reform can create better practices in certain visible ways, but then something else underneath has got to give, because people still need those seven publications.

MATT: So those particular critiques are true, but they have stimulated broad efforts to counteract those incentives, including major institutional changes that have been adopted by whole categories of journals and disciplines, and including the cultural and more social dynamics of the situation, in terms of how papers are viewed in promotion and hiring decisions. So I think both the institutions and the culture are changing. But I guess I would say that it is a common attitude, not just in social science but in any other area of public policy, for people to say that a situation has to do with the fundamental incentives of the actors and cannot be easily improved. And I agree with the "easily improved" part. But that doesn't mean they can't be improved at all. And I actually see quite a bit of improvement in lots of other areas as well, where you might think that people have incentives to avoid progress. But if you can change institutions and make people aware of new information, then you will potentially see changes that improve behavior.

SPENCER: Now, while I'm definitely much more pessimistic than you on average, I do see rays of light, and some of the things you're pointing out I do think are really important and real improvements. As you mentioned, in political science there are now some journals that are doing pre-publication data checking, which I think is really good. There are more people using pre-registration, even though I think it's still a really small percentage of papers that are pre-registered. And there are more people publishing their datasets, which is awesome. I think all of that leads to better science.

MATT: It's not just the practices; it's the learning from those practices, right? There are people who tried to do pre-registration and realized in the process that they were doing more exploratory research than they thought. There were people who tried to check a dataset and realized that the person who said, "This has only been true in the last 10 years," was correct: it's only been true in the last 10 years; my data does not support constant relationships. So it's not just the changes in practices. It's also the learning process from responding to those critiques.

SPENCER: Right. So those critiques lead to changes in practices, and those changing practices lead to a better understanding of the discipline, like: what is exploratory research versus confirmatory research, and are we making that distinction properly? So I do think there are a lot of these good things on the horizon. I just worry that cosmetic changes on the outside, if there's a deep structural incentive problem, don't necessarily solve it, because they just solve the part you can see and don't solve all the hidden stuff, which may be where a lot of the action is. So yeah, I'm cautiously optimistic.

MATT: But like we were talking about, I think I see both: the self-reflection and the institutional changes. So I don't see as much of a trade-off, where we've implemented this institution and thus aren't as reflective about the biases that created it. I think both are happening simultaneously and are likely to improve research.

SPENCER: Matt, thanks so much for coming on.

MATT: Thank you.

[outro]
