March 2, 2023
How are curiosity and innovation connected? What's the most important problem in your field? And are you working on it? Why or why not? Is curiosity the best heuristic — either for an individual or for society at large — for finding valuable problems to work on? What mental models do people tend to use by default? How much is an academic degree worth these days? What are some alternatives to degrees that could count as valid credentials, i.e., as unfakeable (or very-hard-to-fake) signals of someone's level of skill in an area? Can people learn to fake any kind of signal, or are there some that are inherently unfakeable?
Rohit Krishnan is an essayist at Strange Loop Canon, where he writes about business, tech, and economics. He's been an entrepreneur and an investor and is very excited to see when crazy ideas meet the real world. Follow him on Twitter at @krishnanrohit.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Rohit Krishnan about curiosity and problem-solving, shaping mental models, and credentialing.
SPENCER: Rohit, welcome.
ROHIT: Thank you for having me.
SPENCER: I don't even know how to introduce this conversation. I think this is gonna cover many different topics, from innovation to worldviews to credentialing. So let's jump into the first topic here. Tell me about your views on curiosity, and how you think that ties in with innovation.
ROHIT: Curiosity and innovation is something that I had thought about a little bit throughout my career and throughout life in general. But I think, over the last few years, it's kind of come back as this really potent force and something that becomes part of my advice whenever I speak to anybody, regarding career, life, whatever. The shortest version of this is that, if you want to get innovative about anything that you do in life, or if you want to create something new in the world, there's a couple of schools of thought on what are the best ways to go about doing that. One is to try and map things out — write out all of your strengths and weaknesses, what are the world's needs — to try to backsolve into a logical way of going out and trying to solve for things that actually only you can do. And more recently, I've been thinking that curiosity, ultimately, is probably the best yardstick of how we as a species, and us individually, are able to figure out where to put our resources. So a couple of ways that I think about it. One is Joel Lehman and Ken Stanley's book, “Why Greatness Cannot Be Planned.” It is a book that I really like because it ultimately tells quite a few stories about the futility of having primarily objective-driven management. In many ways, they're talking about how curiosity needs to be unleashed. If you look at the world of technology startups, etc., most of them are highly curiosity-driven. The projects that start are rarely how they look when they end or when they grow. Science, in many ways, is extraordinarily curiosity-driven in the sense that you want your scientists (ideally) to go after the things that they are most curious about, and where they think they will be able to make the biggest impact. So the reason I started thinking about this a little bit more is that, if you're in a fairly quantitative field — whether you're in the world of business, most areas of science, tech, etc. 
— we're taught to be highly quantitative and highly analytical about answering the questions that come in front of us. And sometimes that leads us to weird places where, whether we want to or not, we don't actually ask ourselves many questions about what we are most curious about. I guess the classic anecdote is the Bell Labs anecdote, where Richard Hamming went around asking people, “What are the most important problems in your field?” And once the answer came, they were asked, “Are you working on that, or are you working on something else?” And almost always, the answer was, “Something else.” And that leads us to... it's one way of saying, “If you were curious about the problem that is in front of you, why are you actually not working on that?”
SPENCER: Yeah, I think you're referring to the Hamming questions.
ROHIT: Hamming questions, exactly.
SPENCER: Yeah. I have mixed feelings about curiosity. On the one hand, it feels like such an important driver for understanding the world, like you actually want to know how things work and so you go seek out answers. On the other hand, it sometimes seems like it drives people in weird directions where they're really curious about (I don't know) beetles. And then they go spend years learning little minute details about beetles and you're like, “Well, was that actually productive?” Not to say that there couldn't be value to studying beetles; there could, but the fact that they were curious about it, and it drove their decision to investigate it, is that necessarily a good thing? So I feel the tension there where I think it's very valuable, but can also lead people in weird directions.
ROHIT: You're right. The primary answer would be that the number of weird directions being explored is definitely not zero, and arguably should be much higher than the baseline that we have today. And it's for two reasons. One is the fact that, if we had a God's-eye view of figuring out how to allocate resources appropriately — if we had the God's-eye view that allowed us to be better Marxists, for lack of a better word; you know, ‘from each according to their ability, to each according to their need' — if the first part of that could actually be analyzed and done properly, then I would probably say, “Yeah, let's do that. Let's analyze where people can contribute best and figure out a system whereby we will actually enable that to happen. Turns out Rohit is good at X so he should go do that; Spencer's good at Y and he should go do that.” Unfortunately, that doesn't exist. And as a consequence, the best kludge that we've gotten to that works is: let's let Rohit and Spencer and everybody go off and do the things that they think are of highest potential. And if they think those are of highest potential, and they're willing to put their time, energy, money, career, and reputation on the line in order to go and pursue them, then chances are, the eventual outcome on a collective basis will probably be better. Even though, in the middle, you would have people putting shrimp on treadmills and trying out all sorts of crazy stuff. Is that completely useless? I don't know. I was having this conversation earlier today about how there hasn't really been much innovation in the last decade of venture capital (looked at in one direction) because there were billions of dollars poured into 10 million grocery delivery companies. And one way of looking at it is that, yep, we threw a bunch of money at an idea that, maybe as a prior, we could have said was objectively silly, and it turns out it was silly, so yeah, waste of money.
The other way of looking at it is that it was one way to give an extraordinarily large number of product managers, engineers, and people who worked in these spaces an intense familiarity with at least some parts of the world as it is. They tried to solve a really hard problem and, even though it failed, they walked away with quite a lot of skills, knowledge, and a network that they are going to use elsewhere. And that spans everything from the open-source tools that came out of these places (that we'll end up using) to the actual skills that we get at the end of it. And some of this (naturally, perhaps) sounds a little bit like path-dependency apology. I don't mean it to be quite that severe but, at the same time, I don't see that we have the ability to say at the outset, “Do not follow your curiosity and go into this place.” Because ultimately, I think — at least at the margins — we need way more of that, as opposed to way less.
SPENCER: I'm sort of shocked at the level of optimism you just expressed [laugh]. I remember a few years ago, when I started seeing all these ‘you'll get your groceries in 15 minutes' delivery companies, I was so confused, because while there are a few things I could imagine really needing in 15 minutes — like you have a party about to start and you're making a cake and you're like, “Oh, shit, I forgot this ingredient and I can't leave the house because people are gonna get here soon,” that kind of thing — I was just very baffled how that could add a lot of value to customers. I guess maybe I believe that we can, through thought and also through evidence-gathering about the world, eliminate a lot of bad ideas, even though we often can't be confident about what a good idea is. But at least we can say, “Oh, wait, these are not very fruitful directions. This thing is much more promising than that thing.” And this sort of, ‘well, at least people learned a lot about the world and they learned through their mistakes' is not a high enough bar for me.
ROHIT: No, I think that's fair. I think the best argument for agreeing with that is the fact that I've not actually missed any of them, for precisely those reasons [laugh]. I think the question is, if I as an individual (or a professional) think this is kind of insane, that it probably shouldn't be done, and that there are better ways for that capital to be spent or for people to be educated, that's fine. And we should probably attempt to operationalize that however we best can. But I have turned the corner and come around to the optimistic side of it, where I'm like: if you're going to go and say, “I'm going to do a GoPuff clone in Turkey,” try to raise capital, throw it at this and see if it works, and you do that 15 times over, it probably feels like a silly use of money all around. I'm in agreement with you and, yeah, probably thinking a little bit more upfront would have been helpful. The question that I struggle with is that this is a little bit of a sliding scale. Spencer, if you told me, for example, that what you're most excited about is Uber for dogs (that's what you're excited about, curious about, where you think the highest opportunity is, where you need to go and spend time, energy, money), and because you're smart and good at this, you will go off and raise hundreds of millions of dollars in order to make Uber for dogs a reality, I would probably say that kind of feels like a waste. However, the question is a little bit different. The question is: if I think it's a waste, and you don't think it is a waste, we need some way of figuring out what is the right amount of effort to put into something seemingly (or manifestly) silly like this. And I think the best way that we've gotten to, so far, is effectively independent decision-making, where you stumble across a Masa and he manages to put a couple of hundred million dollars into it, and then it all goes down the drain, and so on.
If that sequence of events happened, I think it's easy afterwards to say, “Yeah, it probably shouldn't have been done.” I just don't know that we should be so confident early on in dismissing ideas that seem obviously crazy but later turn out obviously not to be. Does that make sense? I think bad crazy ideas sound awfully close to good crazy ideas. So I guess the light version of what I'm saying is, maybe we should be a little bit more humble about our ability to distinguish between those two sorts of ideas, especially when someone else is trying to go off and do it instead of us.
SPENCER: Yeah, it's an interesting point. I distinguish between different kinds of bad-seeming ideas. One kind of bad-seeming idea is an idea where, even if you succeed at your goal (which is really difficult) and you succeed at this to an incredible degree, it will be pointless, like you will not have made the world any better, or you'll only have made the world very slightly better than it would have been otherwise. That's one kind of bad idea. And I'm very happy to say those bad ideas, we should try to avoid them. There's another kind of bad-seeming idea, which is, “Wow, that will never work.” That would be like the Wright Brothers trying to build an airplane and people saying, “You're never gonna build an airplane. That's ridiculous.” But clearly, if they could build an airplane, that could have a lot of value. I think even then, the idea that “Oh, you could fly a plane,” people could probably appreciate that that would have a lot of value if it could succeed. It just seemed crazy, right? It seemed like a bad idea because it seemed improbable. So I'm a fan of improbable ideas that probably will fail but, if they succeeded, they'd be really valuable. I'm not a fan of ideas where, even if they succeeded, they wouldn't add value to the world. And I see a lot of that second kind of idea, and those are the ones I'm willing to say are bad.
ROHIT: I think, by definition, most ideas will fall into the latter bucket, because it almost has to be that way, right? There aren't going to be that many ‘let's build the first flying machine' ideas around. I mean, as a percentage of ideas at any given point in time — whether we go back to Da Vinci's drawing of a helicopter that doesn't work, or to the Wright brothers — even though everybody dreams of it, the number of people who were curious enough to try and go off and build it has always been very small. And when we are talking about people trying to figure out what they should spend their time on, they optimize for some combination of ‘this is what I can actually see myself putting a lot of effort into,' which is almost like a first, or maybe even second, derivative of the things that you're curious about. So in some ways, like I said, your goal might be to say, “Hey, you know, there's a ton of capital floating about in the world (whether it's in venture or grantmaking, or whatever); I'm going to try to get a piece of that. So I'm going to tell people what they want to hear, which will allow me to succeed in my game, which is basically some version of personal enrichment: try to create something, swing for the fences, and fail. But it doesn't really matter what I'm trying because, like you said, outcomes are not that important.” Sure, on an individual basis, I can't even fault people for trying to do it. Does that make the ideas themselves less worthy? Unclear, because I'm not entirely sure that the types of ideas that the marginal person should be pursuing are going to be more ‘Why don't you go off and try to solve cancer?' kinds of questions versus ‘Why don't you try and make process X within biotech that deals with clinical trials 3% more efficient?'
Because I think the latter is what creates a ton of incremental efficiencies over a long enough period of time — just because there are so many more of them — such that the platform that we're standing on imperceptibly slowly creeps up. Amazon started doing two-day delivery; I mean, does it really matter? Not really, I guess, but it matters enough in the sense that, somehow collectively, it shaves off enough pain for each of us that the value cumulatively added is not negligible either. And as a positive externality of them actually trying to do it, we got AWS [Amazon Web Services], right? The cloud revolution kind of took off. That would have been unpredictable. 20 years ago, nobody would have seen it. Gosh, 10 years ago, nobody would have predicted it would get this big. Maybe even five years ago.
SPENCER: There's absolutely no question that a lot of what happens is extremely hard to predict. There are so many forces that just come out of nowhere; almost nobody sees them, or just a few people recognize them before they happen, and so on. There's this idea that I come back to sometimes, for which I use the phrase, ‘what's good for the hive is not as good for the bee.' And it's this idea that, imagine you're looking at society as a hive and you're like, “What's good for the hive?” Maybe you want lots of bees going around and trying different things, like trying to find food in different places. And maybe even a somewhat random pattern could actually work pretty well for the hive because they're gonna eventually explore a lot of the space. But think about the individual bee, and the bee's like, “I'm hungry, and I want to find food” or whatever (I don't know if that's actually how bees work [laugh]; maybe they have to go to the hive to get food). But let's assume this particular bee has to find his own food. That doesn't mean that the bee should go search around randomly, just because that works for the hive, right? And so I think about this, where, looking at society, it might work pretty well to have people follow their curiosity and try what seems interesting to them. But if you're an individual bee, you might be able to outperform that on average, if you think about it.
ROHIT: It's an interesting question. I'm not entirely sure I agree that what's good for the hive is not good for the bee. But I think that might be because of what I know about hives and bees, as opposed to the broader point you're making. I'll have to think about that. What you're saying is that systems can be good by providing us with benefits overall, even if individually, they can be bad for the person.
SPENCER: Or the individual can outperform by using a different strategy than the whole group is using, even if that strategy works pretty well for the group.
ROHIT: I think that is undoubtedly true. Two things come to mind. And tell me if this triggers a thought for you as well. One is, I don't know if you've seen photos of London in the 1950s. It's kind of incredible how soot-covered literally everything was. My favorite building here, Christopher Wren's St. Paul's Cathedral, is beautiful, gorgeous. And it was basically black. And I don't mean that pejoratively; it literally was covered in soot, and that seemed pretty normal. From an individual standpoint, this was clearly not optimal. People could have figured out different ways of living, in London or elsewhere, rather than trying to make it in the big smog and dying early. But presumably they didn't, because what was good for humanity as a whole was six-day workweeks and 12-hour workdays, where you had to fight like crazy to make sure children didn't have to work. So I think I agree with that in principle. The question to some extent is, what is that individual trying to do to outperform? Let me ask you a question. Does this touch on the efficiency of the market? Is that one of the things that you're hinting at, that there exist sufficient inefficiencies that you can do things that are differentiated, and therefore have a superior advantage?
SPENCER: Well, one way to think about it is to treat ‘follow your curiosity' as a heuristic. It's a simple heuristic that you can tell someone, and it guides your action, right? I think at a societal level, it's not a terrible heuristic, people following their curiosity. But if you're talking to one specific person, and you can say many more words to them, can you do better than ‘follow your curiosity'? I suspect yes. And does this relate to the efficient market? To some extent, in that, if you think about where there are opportunities, why have they not been taken? Because lots of people are looking for all kinds of opportunities, we can break down the different ways opportunities arise. One way opportunities arise is, you can do something other people can't; they just can't take advantage of the opportunity. Or maybe there are other people who can, but there aren't that many of them, so they haven't taken it yet. Another kind of opportunity could be a local opportunity where, because there's a local scope to it, maybe people are doing it in other places, but they haven't done it here yet. You can break it down this way and reason about how there might be interesting opportunities for you to take that are not yet taken by others. I guess what I'm really pointing out, though, is that I think the ‘follow your curiosity' guideline, reasonable as it is at a broad level, can be beaten at a narrow level.
ROHIT: I don't disagree, with maybe two caveats. One is that, like I said, a large number of the people I see (and bear in mind, this sample is perhaps overrepresented by tech people and investors) definitely under-index on curiosity. So if you're coming into venture capital and want to do really well, one of the questions that I normally ask (not that I have a crystal ball) is: what are you actually curious about? Because that's one of the best predictors of, is he or she actually interested in this? Are they going to spend enough time on this? Do they think about this deeply enough? Is this going to be a part of their daily life, so to speak, in a way that surpasses the hours that they spend on the job, or the lists that they look at, or the startups that they speak with? Is that essential? I think it is, but there's no way for me to prove it. The N is too small here. However, what I do know is that a lot of the really smart people I see have instincts that tell them to go to places, analyses that tell them there is an empty spot here where they should go spend their time, at a very high level that is fairly useless once it comes down to the individual. So you might say — I'm making this up — “Hey, technology feels like it is changing the world in some meaningful sense. Biology feels like it's within technology. There are many more things that are actually shifting the world. I should actually go and spend my time on bio,” or “I need to go spend time on AI,” or whatever. And if this acts as a spur — and if there is genuine curiosity on their part to actually go and play with it, create something, mess with it, do things that don't scale, so to speak, or actually engage with that problem in order to dive a little bit deeper — yes, that can actually help. However, if you stop at that level (what I kind of call the McKinsey 30,000-foot level) and say, “Great, the cutting-edge stuff that is coming across is these five things.
That's why I want to be an investor. That's where I want to spend my career.” Great. You could have run that same analysis 10 years ago, and you would have gone with finance and oil and gas. The things would change; the analysis would stay the same. It doesn't tell you anything about what you actually ought to go and do. You might say, “I'm good at coding, I'm good at communications, I'm good at structured analysis.” Great, all those things are wonderful. But given that you have the ability to succeed in a bunch of different types of fields (were you to spend enough time and effort on them), where should you actually go and spend your time? And it's not clear to me that that is a question that can be answered analytically, top-down (at least not to any meaningful degree) without actually using your individual curiosity as a way to funnel things down. And if you're curious about five things that you'd happily spend your time on, but it turns out only two of them are interesting for the world at large (because one turns out to be beat poetry), great, so be it. You still have to use some form of a heuristic to slice that down, one that is able to get you out of the analytics trap, where effectively you're, in some ways, trying to solve the world. And because you don't have the God's-eye view, you're gonna end up making simplifying assumptions, and ending up in a position where you're not perhaps very happy with what you're doing, which seems to me to describe an insanely large number of very smart, very capable people in the world.
SPENCER: What you said reminds me of something, which is, if you read the biographies of just incredibly successful people, it seems like way more often than is normal, they have an obsessive focus. For example, Thomas Edison was just an obsessive tinkerer; all he wanted to do was just tinker on devices and try to get them to do things and try to create devices that did things nothing else had done. Or you take Walt Disney, and he just was kind of obsessive about animation and drawing, even in his youth, and he just had that obsessive focus where it's like, “Oh, clearly, this person is just going to do nothing else,” and I don't know how replicable that is, but I think it's a fascinating observation.
ROHIT: I completely agree. I think this is one of the things that started tipping me off towards this: despite what we might believe, or what I believed (at one point, anyway), most successful people are highly different from each other. Which is one way of saying, they were all obsessed with their own unique things, and they were willing to go through hell or high water to get any kind of clarity on that thing. And to some extent, they're able to do that because they are able to look at a certain subject area, or a problem perhaps, and say, “That is my problem. I'm willing to spend decades on this particular problem.” And I'm not entirely sure that that kind of obsessive focus, the ability to withstand maybe a lot of fallow time (times when you're not actually making much progress), is sustainable without there being something a little bit more intrinsic, perhaps incalculable, that is pushing you. It maybe doesn't quite hold for things like social movements (I don't know, I haven't thought about that much), but it definitely holds true for the rest.
SPENCER: Yeah, one thing that strikes me about it is that it feels like it often goes beyond curiosity to some kind of obsession, or almost bordering on mental illness. It's less like they're following their curiosity and more like they can't help themselves, they have to do this thing.
ROHIT: Yeah, I think that is the crazier end of the same spectrum, as it were. I'm not saying everybody needs to be crazy about something because (a) I don't think a lot of that is in your control, to your point, and (b) I don't even know that you can choose it, necessarily. It's some combination of intrinsic interest that compounds over time, until it explodes into an obsession that you can spend a lot of time on. You have to have a lot of stuff taken care of for you to be happily obsessed with some topic for a few decades; I don't think everyone can necessarily jump at it. However, everyone can at least try to understand the things that they're most interested in, and then try to spend a little bit more time steering towards that, rather than the more (forgive the term) utilitarian ethos of saying, “This is what seems like the best opportunity for me, and therefore I'm gonna go do it.” You know, just like Dustin Hoffman's character was told (in The Graduate) that plastics was the future. And imagine if he had decided to go off and spend most of his time on that. A lot of decisions are made with roughly as much forethought: looking at a giant macro trend and trying to jump in and say, “Yep, that sounds sensible. That's where I need to go off and spend my time.” I just feel, at least in the modern world, we don't need to settle so much.
SPENCER: All right, so jumping into another topic now. Let's talk about creating your own worldview. Where do you want to start us off with that?
ROHIT: I feel like almost everyone looks at the world in some combination of unique ways, some of it their own, some of it borrowed, some of it stolen. There's a combination of views that they actually have towards the world: the ‘philosophies,' you can call them, or Munger's mental models, or what have you. In some ways, I found that it's highly useful to attempt to synthesize this for yourself, so that you at least know how you feel about certain aspects of the world, and so you can be a little bit mindful of how you want to address certain issues when they come in front of you. Because one of the things that I was finding (when I first started writing, for example) is that there are a bunch of different types of topics that are interesting to me. And if you were to ask me what unites them, I wouldn't have a good answer. I still don't really have a good answer, but I'm starting to have a slightly better answer about how I think about the connections between these things, or whether there is an underlying view that links all of them. And to some extent, that feels like the creation of a worldview, which not enough people (I feel) spend enough time obsessing over. Some of it is because they just feel like it's unnecessary. You're trading off internal coherence amongst your views — reduced variance, perhaps, or reduced surprise — which is maybe a little bit fair. But to me, the extra analytical benefit that it gives you — in terms of trying to understand your own thought processes and the metaphors that you keep using, or the things that you keep reaching for — is highly useful. Because one of the ways that I think about it is that, ultimately, we are all trying to create some form of a knowledge tree inside our heads, and we want to figure out the structure of that tree, so that when a new fact comes in, or a new theory or hypothesis comes in, we know where it sits and what else it relates to.
And I don't think you can do that unless you actually have a little bit of internal effort in trying to create that tree once or twice, not with the intention that it will be perfect and the best philosophical work that somebody creates; it doesn't need to be Tractatus. But at the same time, doing it definitely makes you better at identifying the limits of your knowledge and the things that you're most excited or obsessed about.
SPENCER: So how would you describe what your worldview is?
ROHIT: I think about the world a little bit as networks. I think of most things in terms of networks or complex systems, and that has helped me personally quite a bit. One way that it manifests is that — almost stepping out of the philosophical roots — for anything that actually happens, I like thinking of different nodes taking actions, creating edges between them, and those edges shifting over a period of time, so that, in the very grand sense, there is some multi-dimensional lattice of cause and effect that is all linked together.
SPENCER: Could you give an example?
ROHIT: Yeah, the tangible example here might be thinking about the economy. So I trained as an economist, lots of equations, a fair bit of coding, etc., and for a long time, at least, it didn't make a huge amount of sense to me. What helped it make more sense was to stop thinking about it as “this is microeconomics, this is game theory, this is macro,” etc., and instead step away from that and start thinking a little bit holistically: there are multiple nodes, which are individual actors, who can make individual decisions. There are edges created when there's an action between two nodes. And now let's think through how any piece of information or money (or whatever) flows along those edges. And if you create boundaries within that multi-dimensional lattice, all of a sudden you can start slicing off little mini-universes and analyzing them. So you would ask, for example, “How should we think about inflation?” There's a bunch of different ways or models we have for thinking about that. The way I personally think about it is: there is a certain supply of a bunch of different types of goods coming in. As each one of these comes in, we have individual choices or decisions that we're making: if the price increases, we do X; if the price decreases, we do Y. Once we start making those decisions, those trickle up or trickle down into the price setting that the firm needs to make. Once the price setting that the firm needs to make is decided, that trickles up or trickles down into the profit that the firm makes, or the dividends that it gives, or the asset prices that come out as a result. And the interest rate set by the central bank, for example, is one factor that moves this complex web in a bunch of different ways. We have the conversation about how much lag is important.
The way I think about that is, how fast or slow does information actually flow through this particular web? And the helpful part for me is that, number one, it provides a reasonably clear mental model in terms of — as I'm describing — I can actually see the different bits lighting up, or the way the information actually flows from point A to point B, which is helpful. But number two, it also helps you realize that the model you're describing is only one part of the reality. Normally, when you describe a model, there might be an error term in there. And the rest of it, you say, “Yep, this is just the way it is.” But to me, this actually helps because it's not that there's an error term in any particular model; it's just that you haven't taken into account all of the different causes and effects that leak out of the model and step into the rest of the world that is connected up with it. As I'm saying it, I don't know whether this is highly esoteric, or just something that I obsess about, but it definitely helps me in terms of understanding the world a bit better.
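The nodes-and-edges picture of information flowing through an economy with lags can be made concrete with a few lines of code. This is only a toy sketch: the node names, edge structure, and lag values below are invented for illustration, not anything stated in the conversation.

```python
# Toy sketch: a shock propagating through a small economic network,
# where each directed edge carries information with a time lag.
# All node names and lag values here are made up for illustration.
from collections import deque

# Directed edges: (source, target, lag in time steps)
edges = [
    ("central_bank", "commercial_banks", 1),
    ("commercial_banks", "firms", 2),
    ("firms", "prices", 1),
    ("prices", "households", 1),
    ("households", "firms", 2),  # demand feeds back into firms
]

def propagate(start, max_steps=6):
    """Walk the network, recording when each node first 'lights up'."""
    arrival = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for src, dst, lag in edges:
            if src == node:
                t = arrival[node] + lag
                # Keep the earliest arrival within the horizon.
                if t <= max_steps and (dst not in arrival or t < arrival[dst]):
                    arrival[dst] = t
                    queue.append(dst)
    return arrival

print(propagate("central_bank"))
# → {'central_bank': 0, 'commercial_banks': 1, 'firms': 3, 'prices': 4, 'households': 5}
```

Changing a lag or adding an edge changes how fast the shock reaches each node, which is the "how much lag" question in miniature.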
SPENCER: It's interesting, because of the way you use ‘worldview'. I would describe what you're talking about a little bit differently, as a mental model or framework: this mental model of a network diagram with nodes and edges. And that's a very powerful model you can use on many different topics, like supply and demand and, I'm sure, hundreds or thousands of other things; whereas I think of a worldview a little bit differently. It's like a central belief about how things function. Would you say that what you're talking about are more these powerful frameworks that you can apply in many situations, or more like principles about how the world operates? An example worldview might be: if you work hard, you'll eventually succeed, or something like that. That's how I think about a worldview.
ROHIT: Yeah, I think you're probably correct in terms of the characterization. I do think they link together quite closely. For example, there's a trilemma that I liked a while back, which is the incompetence/malice/bureaucracy trilemma. When something goes wrong, I like to figure out which one of these things is actually happening, just as a way to slice the world into slightly more understandable vectors, and to figure out that I don't necessarily need to jump directly to incompetence all the time. Sometimes it might just be bureaucracy; sometimes it might just be malice. It doesn't always need to be one or the other. And to me, this was most useful, at least in terms of trying to understand how to deal with paperwork, which you have to do quite a lot when you're in India, or even the UK. The reason I don't think of it purely as a mental model is because I feel that, at least the way I use it, it seems to sit at the Ur-model level; I mean, it seems to sit one level above. And part of the reason I think it does is because, at some rooted stage, it informs a huge number of different things that you link it up to. You can take it that this is how you see (whatever) business-world competition, this is how you see the investment world, the role that finance plays in it, the egregores that surround us; this is how you see what we were just talking about, whether or not you need to let curiosity be the guide. Because if you're living in a very large, very complex landscape, and you don't have sufficient computational ability to figure out the cause and effect of the options immediately available to you, then you have to use some heuristics so that, collectively, we can make some decisions. That made me a lot happier about moving away from a purely rule-based system to something that is slightly more heuristic-based. That's why I call it a worldview.
But I'm happy to accept amendments to that phrasing just because, honestly, I don't really know what to call it.
SPENCER: Would it be fair to say it's like a default mental model? Rather than just playing the role of a normal mental model that you might use in a particular circumstance, it's your go-to mental model that helps you make sense of much of the world that you sit with naturally.
ROHIT: Yeah, I think that's actually a great way of putting it.
SPENCER: Why do you feel that people should have these? Because it seems to me, most people probably don't have something like this.
ROHIT: Two reasons. One is that it provides a certain level of coherence to the stuff that you actually think, which I think is useful in having conversations. It's helpful, secondly, because it provides you with something that you can put forth relatively concretely and therefore change your mind on the basis of. The reason something like this is necessary, I think, is because it helps you learn what you know, or at least connect up the different things that you think you know (or you think you feel) into a slightly more cohesive way of putting things together. And one of the problems that I have observed through writing, reading, etc., is that a lot of people are highly confident about their opinions within their silos, without really a clear view of where that particular opinion ends, or where the efficacy of that opinion actually comes from. And thinking a little bit about internal coherence (not too much, perhaps) is usually helpful in trying to test that. And this holds as true in the world of economics — where rigor is extraordinarily prized — as it does in the world of politics, where opinions are as plentiful as they are right or wrong. Everybody can pick their own particular battles, right? There are plenty of them, at least in the cultural world, to choose from.
SPENCER: What are some other default mental models (if we want to call them that), that you see people use?
ROHIT: Religion is a wonderful one, because it provides the Ur-story that allows you to see the world as a linked sequence of cause and effect that either has a purposeful evolution towards something good, or at least provides a story for your life. I think it provides a pretty useful (and dare I say, all-pervasive) model that kind of runs through everything. In terms of specific ones, I see reductionism as extremely popular, which is perhaps what my own model grew up fighting against a little bit. It might be best phrased as: if you think hard enough, you can itemize this particular process so that you can understand all of its moving parts. That works really well for a certain set of physical phenomena, and it works really badly for a large set of social phenomena, and for pretty much every other emergent phenomenon.
SPENCER: Where do you see your model improving on reductionism?
ROHIT: This is going to be a big claim that I feel a lot of academics are going to come after me for. For me, at least, it is helpful because such a large percentage of the issues that we normally talk about — stuff that we talked about earlier: curiosity, where people should spend their time, whether it's science, technology, progress — are not problems that are easily amenable to analytical solutions. These are problems that are perhaps slightly more amenable to empirical solutions from a computational point of view: you can simulate them over a period of time and then try to find some answers from that. And to me, it made me a lot more confident about the latter approach for a larger number of problems — or even a heuristics-based approach — than about the implicit belief that you can slice things finely enough, find out their moving parts, and therefore figure out how these things actually work. It definitely doesn't work all that well for economics; it doesn't particularly work that well even for biology. And outside of physics — even in physics in a lot of different places (thermodynamics being one example, or emergent phenomena generally being another) — we have difficulty dealing with these through reductionist methods. So that would be one place where I feel we have the potential to do far better.
SPENCER: Do you think of reductionism as underestimating the importance of context and situation?
ROHIT: Yeah, I think that's a good way of putting it. I think, generally speaking, reductionism, in some ways, is a way of saying that any phenomenon that happens comes from the interaction of a bunch of smaller parts that sit underneath it. And therefore, if you understand the interaction of the smaller parts underneath it, you can understand the broader phenomenon, which does work wonders, right? I mean, it does work quite well. This was the belief at least back from Descartes's day, when you could start thinking about animals as automatons. And once you understood the automatons and how they moved around, then you suddenly figured out, that's how animals work. And what we're learning is that there are a bunch of different levels of emergent phenomena that actually exist, whether it's in the biological sciences or the social sciences. And the only way that we can get a grasp on some of these things is, perhaps, to step a little bit away from reductionism towards something slightly closer to emergence. It's best seen in a whole bunch of different complex adaptive systems. And I know I'm repeating myself, but economics is probably one of the best places to see it. We've had different versions of this through history as well, in some different ways: whether it's the extra-physical stuff that we always thought might exist, that creates the mind (which kind of leads to the modern hard problem of consciousness), or whether it's the version of cellular automata analysis that exists today, running from Conway's Game of Life all the way to Wolfram's different analyses. I think there's a link there: there's a set of stuff where, yes, individually they might have simple interactions at the very base level.
But if you want to analyze them or understand the behavior of the system at the higher level, you do need to perhaps step away from the smallest level where things work, and perhaps have different tools to analyze it at different levels.
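The cellular-automata point is easy to make concrete with Conway's Game of Life itself. The update rule below is the standard one; the "blinker" pattern then shows behavior (a period-2 oscillator) that is most naturally described at the level of patterns, not individual cells. This is an illustrative sketch, not code from the conversation.

```python
# Conway's Game of Life: dead-simple local rules whose higher-level
# behavior (gliders, oscillators) is best described at the pattern level.
from collections import Counter

def life_step(live):
    """One generation. `live` is a set of (x, y) cells that are alive."""
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive next step with exactly 3 neighbours,
    # or with 2 neighbours if it is already alive.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

blinker = {(0, 0), (1, 0), (2, 0)}               # horizontal bar
print(life_step(blinker))                         # flips to a vertical bar
print(life_step(life_step(blinker)) == blinker)   # prints True: period-2 oscillator
```

Nothing in the three-line rule mentions "oscillator"; that concept only exists one level up, which is the emergence point being made above.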
SPENCER: Yeah, it's fascinating how even if something is composed of smaller parts, it can become harder to analyze at the lower level. Water, I think, is a really good example, where you can think about water as a whole bunch of atoms colliding with each other, but it's incredibly hard to model that way. But if you consider it as a continuous fluid — so you imagine that it's not lots of little tiny atoms colliding, but an infinitely divisible thing, like a curve, or a wave (a wave, I guess, is more accurate) — then it suddenly becomes easier to model. And so there are these different levels of description, some of which make it really hard to understand the thing, some of which make it really easy. And going to a lower level doesn't always improve your understanding; sometimes it actually makes it more difficult.
ROHIT: Yeah, I think that makes sense. I think water is a perfect example, because turbulence is famously one of those problems where it's still incredibly hard to do anything more than say, “Here is an empirical solution that works within these parameters.” Beyond that, there is no way that we know of today to model or predict this behavior, whether it's the flow within the pipes that bring water to our houses or the Great Red Spot on Jupiter. I think similar questions exist across the board.
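The continuum view of water described above is, in practice, how fluids get simulated: you discretize a smooth field rather than tracking particles. As a minimal illustration only — the 1-D wave equation on a periodic grid, with made-up parameters, which is vastly simpler than real fluid dynamics, let alone turbulence:

```python
# Minimal continuum-style simulation: the 1-D wave equation u_tt = c^2 u_xx,
# stepped with a leapfrog finite-difference scheme on a periodic grid.
# Grid size and parameters are arbitrary illustrative choices.
import math

N, c, dx, dt = 50, 1.0, 1.0, 0.5           # CFL number c*dt/dx = 0.5 < 1: stable
u_prev = [math.sin(2 * math.pi * i / N) for i in range(N)]
u = u_prev[:]                               # start the field at rest

for _ in range(100):
    u_next = [
        2 * u[i] - u_prev[i]
        + (c * dt / dx) ** 2 * (u[(i + 1) % N] - 2 * u[i] + u[(i - 1) % N])
        for i in range(N)
    ]
    u_prev, u = u, u_next

print(max(abs(v) for v in u))               # amplitude stays bounded: stable scheme
```

The field u(x, t) never "knows" about molecules; the higher-level description is what makes the problem tractable at all.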
SPENCER: I think there can be this tendency to think that the lower-level things are more real than the higher-level things. You're like, “Well, what is a duck, really? It's made of a bunch of atoms. That's really what it is, right?” A duck is just a concept we have. But then it's like, “Well, what is an atom?” An atom is also just a model, right? It's got a proton and a neutron, and it's like, “Oh, okay, really, it's protons and neutrons,” and then, “Wait, what is a proton? That's just a model.” And then you're like, “Well, maybe that's just made of quarks.” At almost every level of description, you're just talking about different models. And yes, the models get more and more fine-grained, but it's not clear at what point you're actually talking about the real stuff. Even if you keep subdividing all the way down, then it's like, well, even if you're talking about quarks, are those more real than, say, the complex fields that make up everything? And now we're back at a high-level description. So yeah, I've been taking more seriously the idea that everything we understand is a model. It's models all the way down. Whatever is really out there, the real stuff, there are just lots and lots of ways to look at it.
ROHIT: Yeah, I think this is — going back to the worldview comment — the way I at least see it: the models exist as ways for us to understand parts of the elephant. It's the old story of the blind men and the elephant: they all touch it, and each one thinks that the elephant is a really large trunk, or a really wavy tail, or whatever. And we can use a bunch of models to understand different parts of the phenomenon, not with 100% certainty in any of them. But together, they start telling us a little bit more about the phenomenon. But I think there is an intrinsic drive to say, “No, no, we understand how the most basic thing actually works. Therefore, that's the real stuff, and anything that is emergent is not real,” which doesn't ring true to our lived life in any meaningful sense. There's a certain scale at which most of our life actually occurs, and things within that scale are the most real to us. And whether that scale is impacted by highly emergent social phenomena, or highly non-emergent physical phenomena like (I don't know) an asteroid striking us, I think they both have extremely real and tangible impacts on our lives. And the degree of abstraction at which they sit doesn't really matter as much as the effects they actually have on our lives.
SPENCER: All right, so final topic I'm gonna discuss with you today. Let's talk about credentialing and how credentialing has changed. Do you want to give us a quick intro on that?
ROHIT: Sure. My interest in this topic started a little bit when I started looking at the highly non-controversial question of: should one go to college? And there's a huge amount of conversation that exists around this topic, primarily to do with answers of, “Hell, of course, yes.” We should have everybody go to college, and the problems that we have in life or society exist entirely because of — pick your poison — not enough people going to college, not enough people going to liberal arts colleges, not enough people getting a STEM education, etc., etc. That's on the one hand. On the other hand, if we look at the actual conversations of people who have gone to college, there are mixed answers that come out of it. On the vocational side, about getting jobs, some get highly helpful answers out of it. And on the experience side (did you actually learn enough useful stuff?), there are mixed results. So the question to me was: of course, even if you don't go entirely to Robin Hanson's kind of extreme, there is a credentialing part of it; there's a signaling value to going to college. Great, that's fine. The interesting question is that this value seems to have declined, at least over the past few years (past couple of decades, perhaps) compared to what it was like before. And one way to think about this is: okay, if you wanted to go to college in the 70s or 80s, (a.) you could actually go to college, it was much easier; (b.) going to the colleges (at least the top colleges, for example) had a disproportionate payoff in terms of what you would be able to do afterwards. And if we ask several of the top scientists, academics, etc., they will tell you that, considering their backgrounds, they wouldn't have gotten into those colleges (where they're Professor Emeritus) as an undergrad or postgrad today.
So in some ways, the question that fascinated me was: as more and more of us have started getting credentials, the value of credentials has decreased in some sense, even though everybody agrees that credentials are extremely important. And the thoughts that resulted from it are twofold. Number one is the fact that there is a scale problem here. If you have far fewer people who are interested in getting credentials, then fine, they work pretty well as a gateway mechanism for you to understand something about who they are, their drive, innate ability, interest, etc., and the credentials work as intended. As the strategy of getting a credential gets known more and more, all of a sudden, people understand, “Hey, if I go and get that credential, suddenly this other world entirely opens up to me.” And all of a sudden, we're moving away from ‘should I go off and get a credential' to a world where you're in a rat race or a Red Queen race to get ever more credentials, or use the credential but still jump through 15 hoops in order to go off and do it. There have been a few threads popping around about somebody who's got a 169 out of 170 on their quant GRE but wonders whether they should retake it because the current Ph.D. programs are so insanely competitive, which is the flip side of what happens when credentials take over the world. So we're starting to see cracks in the system. In terms of education, all of a sudden, if you're in a startup and you want to hire engineers, it's not necessarily the case that the credential of them having gone to Stanford or Berkeley is all that helpful, or even necessary. You would much rather look at other indicators that truly measure the quality instead of the credential, which has become Goodharted.
If you're interested in funding great research, it's starting to be the case that you're getting a few more genius grants and things like that, where you start looking at more illegible metrics about the person's interest, drive, curiosity, ability to come up with cool new ideas and test them, etc., rather than just seeing whether they have jumped through the 75 hoops that are required for them to be successfully placed into a tenure-track position. And on and on and on. So credentials started out as an answer to the question, “Okay, as we're having more people, what is the best way for us to figure out who's actually good? How do we assess merit?” And as a much larger number of people learned about the strategy and started getting credentials, our measurement systems basically started breaking down because, in some ways, they got Goodharted. And as a consequence, today we're coming back around to a world where the credentials that used to exist have less cachet than they used to, and more and more, we have to find different ways of slicing this problem, and finding talent has become much harder.
SPENCER: It seems to me that the cracks that are forming, as you say, are really non-uniform, right? In software engineering, I think it's increasingly accepted that you don't necessarily need a college degree to be a good software engineer. And if you can point to some GitHub projects where you did really impressive work, you might be able to get a job as a software engineer. But I think that, in a lot of industries, they still are going to look for a college degree as a standard thing. And if you went to Harvard, they're going to view that as a huge bonus. And there's still, I think, a number of top firms that only recruit from a few Ivy Leagues, basically. So I'm wondering, do you think that's right, that it's actually...yes, cracks are forming, but it's very non-uniform and there's still many parts of the system where the college degree is absolutely expected?
ROHIT: 100%. I think, almost by definition, this won't (a.) happen overnight or (b.) happen all that fast. But that it is happening at all is the beginning of a little bit of a U-turn. I don't think credentials are going to completely disappear, by the way. I don't think that's the point of this. The point is a little bit more that the purpose they used to serve is now something they don't really serve. We still end up using credentials in a lot of places where no better or more differentiated methods of identifying talent exist, so they've become a little bit of a necessary evil. But, at the same time, the cost of getting some of those credentials has increased like crazy — academia is probably the best example — but it holds true for everything, right? Where you went, where you studied, your alma mater, the job you had: suddenly you're looking at some of these things with the benefit of scale and saying, okay, if you have 1,000 Harvard MBAs coming out every year, at some point the supply is large enough that the credential by itself doesn't tell you what you need it to tell you; you probably have to dig a little deeper to figure out whether the person has the right kind of chops for a particular job. Software engineering is, in some ways, the easiest, because it's the most legible. But even there, yeah, you could theoretically look at GitHub repos, etc., but those work today as better signals precisely because not everybody is going and doing the GitHub repos. So one of my theses is that, ultimately, all problems exist as problems of scale. And this is one of those cases where, as soon as people figure out that creating new GitHub repos (or contributing to open source) is one of the ways to bolster a resume, this will become the coder equivalent of going and building houses in Cambodia, where all of a sudden everybody has leveled up by doing the same stuff. And suddenly, one extracurricular is no longer sufficient.
But you need to do 2, 3, 5, in order to get into some of these institutions.
SPENCER: Yeah, it's interesting, the difference between a non-fakeable signal and a fakeable signal. A non-fakeable signal would be something where you can directly witness it, right? Like someone claims they're a good chef, they make a feast, you eat it, and you're like, “Wow, that's good,” and you can't really fake it. Well, I suppose they could do it fraudulently; they could have actually bought it from a restaurant. But let's say they didn't commit fraud, they actually cooked it. Then there's fakeable signals, like things where you could get a college degree, but maybe you got in for bad reasons. You didn't get in due to talent, like the scandals where people would pay to get their kid on the softball team or something, as a backdoor into college. And maybe you got shitty grades, but you still have a degree from a really good university. Insofar as we can construct non-fakeable signals, it seems like it somewhat gets around this problem. Curious to hear your reaction to that.
ROHIT: I think non-fakeable is doing a lot of work there. I think the best we can probably say is that it won't get faked immediately.
SPENCER: Maybe there's a spectrum, harder to fake. There's a spectrum there.
ROHIT: Yeah, I think ‘short-term illegible' is probably a better term in some ways. You have the proof-of-work-equivalent stuff, where you've done this thing before, on a small scale, separately, etc., and therefore you're able to assess the talent more directly. But I think the problem is one level higher. Say you had to hire 10 people; you could use whatever methods seemed most useful to you. You can ask your network, or get them all to do a coding challenge; coding is probably not the only one. You can figure out different specific, specialized hoops that have higher predictability. The question is whether any of those actually work once you scale up to the stage where you need to hire a thousand or ten thousand of those people. Because if you need to hire 10,000, you're going to look at 5x that in applications and, all of a sudden, you probably just do not have sufficient compute capacity, so to speak, to read through those and come up with an answer. So the more stringent you make your selection criteria, the more you fall prey to Berkson's paradox, and effectively start anti-selecting against the traits that you actually want, which is a reasonably well-known problem. As you start having more and more scale, what do you do? You need to have some metrics that seemed ‘ungameable,' at least in the beginning but, over a period of time, no matter what you do, people will figure out that the target that you want and the selection criteria that you're using are not necessarily congruent, and they'll try and game that. I think it's a little bit of a constant battle. It's like cybersecurity; ‘you level up and the other side levels up' kind of thing. So you do need to change it every now and then as well.
SPENCER: I think these standardized tests are a fascinating example of this. In the US, you've got the SAT and the ACT, and you can take these to apply to colleges. And there's an interesting dynamic going on where, on the one hand, the test creators say these can't be gamed, that this is just a fundamental skill or aptitude that we're measuring. And the literature on IQ tends, at least, to claim to back that up: things like IQ are very hard to improve, if there's any way to do it at all. On the other hand, you have all these companies that claim they'll raise your score by teaching you tricks and having you practice and so on. People spend a huge amount of money hiring these companies to help them, or their children, get higher scores. And now you have this phenomenon where a bunch of colleges are saying you don't actually have to submit these standardized tests anymore. In some cases, they might be saying, “We just don't even want to look at them.” But in others they're saying, “Well, you can submit them if you want, but you don't need to.” So I'm curious to hear how you see that as fitting into this.
ROHIT: Yeah, I think the US is effectively like (I don't know) 20 years behind India in this case, because India is run on standardized tests. At the end of your high school, you write a bunch of college entrance exams, and that determines the schools you go into, the most prestigious being the IITs. I remember, after high school, you write, effectively, a couple of standardized tests to get in. And there are specialized tests later, but the standardized test is a big component of it. And this was like 20 years ago, but people most definitely were so clear that the standardized test is all that matters that they would routinely not show up to class for the last couple of years of school, just because they would go to coaching centers and study for the test instead. I know so many people who did that. There are people who, at the end of high school, will take maybe a one- or two-year break to go off and study specifically for the test. Because getting a good rank and going to a good school versus not had a very high delta in terms of your life outcomes. So what does this mean? I guess point number one is, of course, you can get better at tests by studying. I would be very suspicious of the unfakeable or ‘ungameable' claim that people are making. It's a test, especially a standardized test; if you just do enough of them, you start getting better at them. And there's a trick — they change it every year, and the questions change, the types of questions change, etc. — but the delta is often smaller than your ability to learn. I'm not saying don't do them; absolutely, you should. But at the same time, relying on that as a criterion works only until people start spending huge sums of money on coaching classes, taking time out of school in order to learn just this one skill.
And all of a sudden, what would have been a pretty good measurement device has become fairly Goodharted, and become a proxy for the ability or effort that you're actually putting into learning for the test, which again, is a proxy for a whole bunch of different things.
SPENCER: Yeah, one of the really interesting things in the academic literature on IQ is that there are a lot of claims that IQ can't be raised, or that people don't know how to raise it. But on specific tests, people can get better scores, right? So if you're practicing just one very specific type of IQ test — like a vocab test, those are sometimes included in IQ tests — clearly you can improve your vocab; or memory tests: people can practice and get better at memory. There are lots of tricks for that, and so on. So it's this funny thing, where the claim is that the thing itself (IQ or whatever) can't be raised, but on each individual way of measuring it, you can learn to game it, and it's interesting that both of these things are being claimed simultaneously.
ROHIT: Yeah, absolutely. I think, ultimately, what we want from people and what the various credentials and tests actually tell us are correlated until they're not. And they're correlated when they're not actively being gamed. And the actual test itself? Of course you can improve your score over a period of time, even if you can't improve your IQ. But that's only true because the IQ test is not one that people are actively trying to game. If suddenly everybody had to take the Mensa test, then that would be the one that people started working much harder towards. And in a few years, all of a sudden, that becomes less useful as a predictor of where you will end up. Now, I don't even think this is all that bad. It just means that you have to be consistently vigilant about what tools you are using to hire or select people, and to be sure that the markers that you're using to select stand the test of time. If, all of a sudden, a particular measure or a particular test became useless or less predictive, you're able to move past it to something else, rather than being stuck in its legacy over a long period of time. Because nothing that we want is necessarily the thing that is predicted by the test, right? If you're going to give tenure to an academic, you want to give it to somebody so that she's able to do cutting-edge research and get a Nobel Prize, or change the face of humanity, or understand or discover things that have never been understood or discovered before. What is the best correlate of that? I don't know; nobody really knows. We thought it would have been innate ability that you could have measured via GPA. Does that work? Not really.
I mean, it does work in the sense that you probably don't want to give tenure to somebody with a GPA of two but, at some point very quickly, it becomes less correlated and, pretty soon, it gets anti-correlated, because you get people who have 4.0 GPAs and it turns out they're really good at studying to the test, but not so good at actually creating great discoveries. These things become anti-correlated. You want people who will build the most innovative, exciting companies of the future, and those are the people you want to fund. Great. There was a time when smart people went to Stanford and, all of a sudden, you could just fund Stanford engineers and that worked for you. But does that work any longer? Not quite, because Stanford engineers realized that there were shortcuts to raising millions of dollars. And after a little bit of time, once that fell down the drain, you have to find new ways of figuring out where the talent actually exists and who has the ability to create an innovative startup. So, anti-correlation at the top end is true; Berkson's paradox is real. So measures do need to keep changing.
SPENCER: Do you view it as just the life cycle of credentialing that a new credentialing procedure will come into existence, it will work for a while, it will eventually get gamed, and then a new credential has to be introduced to replace it?
ROHIT: Yeah, I think it's one of those inevitable trends that will keep cycling over a period of time.
SPENCER: Right. Thanks so much for coming out. This was a fascinating conversation.
ROHIT: Thanks, man. Thanks for having me.
JOSH: A listener asks, "What's a lesson you've learned that's been hardest to accept?"
SPENCER: I think for me a lesson that was hard to accept is that there were some people in my life when I was younger who I think are harmful people — I think they have character traits that cause them to tend to harm people — and I think it took me too long to really accept that. And now, I'm much more careful about who I let become close to me because I think it's really, really important to not let people get close to you if they're the sort of person that tends to be harmful.