with Spencer Greenberg
the podcast about ideas that matter

Episode 058: Risk-Driven Development and Decentralization (with Satvik Beri)

July 12, 2021

What is risk-driven development? How should we weigh advice, best practices, and common sense in a domain? What makes some feedback loops better than others? What's the best way to take System 2 knowledge and convert it to System 1 intuition? What are forward-chaining and backward-chaining? When is it best to use one over the other? What are the advantages and disadvantages of centralization and decentralization?

Satvik Beri is a cofounder and head of Data Science at Temple Capital, a quantitative hedge fund specializing in cryptocurrency. He is a big believer in the theory of constraints, and he has a background helping companies find and eliminate major development bottlenecks. Some of his interests include machine learning, functional programming, and mentorship. You can reach him at

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Satvik Beri about risk-driven development, the value of networking and body language, learning through simulation, and the cost and benefit of centralization in decision making.

SPENCER: Satvik, welcome. It's great to have you here.

SATVIK: Thanks. It's good to be here.

SPENCER: The first question I wanted to ask you about is this idea of risk-driven development. What is that? How can we use that?

SATVIK: I realized that with a lot of projects I was interested in, such as research, starting a startup, or building a software product, you could do a lot of work, and eventually you would learn something that invalidated much of that work. And so a lot of these projects would end up either taking a lot longer than expected or failing entirely.

SPENCER: Right. And the question is, could you have figured that out much earlier?

SATVIK: Exactly. I realized, for the most part, in most of these domains, people weren't really prioritizing that. Or maybe in one or two specific cases, they were prioritizing some specific risks, but often the risks are different across each project.

SPENCER: What would this idea of using risk-driven development look like? Could you give an example?

SATVIK: As an example, say you're interested in a career change — I was actually early in my career, and like many people, I was exploring a lot of different paths. So a really inefficient way to do it might be to just take another job for a year. (That's what I did, and it was not very effective.) [Laughs] I learned a lot, certainly, but I could have learned that I didn't like, say, product management very much in probably two weeks of dedicated research instead of a year of work. So a way to apply risk-driven development in this case is to ask, "What are my biggest uncertainties about this?" and "What's the smallest thing I can do to alleviate the most uncertainty?"

SPENCER: What would you say it was in that case?

SATVIK: In that case, I simply didn't realize how much time I would have to spend presenting, and how little I enjoyed that. I could have probably found this out by talking to three people, asking them how much time they spend on different activities, noticing that presenting was a big one, and then doing more presentations within my current role at the company, rather than switching to product management immediately.

SPENCER: Got it. If you kind of did due diligence to talk to people in that role and figured out what it really involves, you would have realized, "Oh, wait, there's this big risk, which is having a lot of presentations. I don't necessarily know how I feel about that, so let me go gather evidence on that by maybe practicing doing some presentations and see if that's actually something I want to be part of my job."

SATVIK: I probably was actually pretty aware that I wouldn't like to spend that much time on presentations, but I simply wasn't aware that it would be such a big part of the role.

SPENCER: Got it. Then how would you generalize this idea? How can people apply this in other domains?

SATVIK: I think in software development, there's this concept of agile. One version of it is you want to get a product in front of users as soon as possible in order to get their feedback. I think that's really great if your biggest risk is user feedback. If you think your biggest risk is something else, such as your application falling over when it processes a large amount of data, then you probably want to prioritize a prototype on large data before showing it to customers.

SPENCER: I think that's a really good example, because that advice – build the first prototype as fast as possible and just get it in front of users – seems like really good advice on average, because a lot of people have a bias toward not getting enough user feedback. But as you say, if you think more deeply about it, it's like, "Well, that kind of feedback answers certain types of questions that might be important a lot of the time, but there may be all these other questions." So for example, sometimes the biggest risk is technical risk, like, "Can you actually build the thing to do what you claim?" and not so much, "Will users like it?" To give an example of that, imagine that your startup is trying to cure all forms of cancer. There, the question is not, "Would users want it if I succeed?" It's, "Can we actually do this?" For any given startup, the way I think about it is that there's always some great risk, but you can choose to some extent whether you have more technical risk versus more market risk, and so on. So you pinpoint the source of the risk and then try to design a plan to get more certainty around that risk as fast as possible. Is that right?

SATVIK: Absolutely. I don't think anybody would be against the idea of curing all cancer. But this actually came up in a project I had, where we literally weren't going to change the user interface at all. There, it was really clear that getting a prototype in front of users wouldn't make a difference, because we weren't trying to change that interface. Instead, when I thought about it deeply, I realized that to do this project, we would have to use a lot of libraries and technologies that we hadn't used before and that were kind of cutting edge. There was a good chance that some of them would just fall over on realistic large workloads.

SPENCER: This raises an interesting question around taking advice, because sometimes we're given advice, let's say an industry best practice or a standard technique, and we don't actually understand the reasoning for it. Sometimes we might want to take it anyway: because it's a best practice, because people have been doing it a long time and recommending it, it's probably a good idea. There are other pieces of advice that you really only want to follow if you deeply understand the reasoning behind them. Showing your product to users immediately is an industry-standard best practice, and yet, as you point out, it's not always the best advice. Any thoughts on that distinction between following advice because it's a best practice versus understanding the causality?

SATVIK: I think it's a really interesting topic, and I've actually sort of put a lot of thought into how much you should weigh common sense in different domains. So one thing to consider is that a lot of this advice isn't necessarily intellectual recommendations — it's psychological. As you mentioned, a problem that most people have with user feedback isn't that they think that they don't need it, it's that they know that they need it, but they're still just not doing enough of it. So that advice isn't really meant to convince you, it's meant to change your behavior.

SPENCER: That's interesting. So by framing it as "This is a best practice that everyone does, everyone expects you to do," you can use social forces to get people to do this thing that they probably know they should do anyway.

SATVIK: Yeah. I would say that, on the other hand, some pieces of advice are more directed at getting you to make the best decision. For example, an advisor helping you figure out which major to pick for college isn't really trying to convince you to do the thing you already know is best. In most cases, they're trying to offer information that you don't have and maybe clear up misconceptions. These are two very different kinds of advice, and the amount of attention you want to pay to each one is pretty different. In particular, if you think a piece of advice is mostly psychological, and you already think that you're doing the right amount of it, or you think that there's a genuinely good reason it doesn't apply to you, then you should be willing to disregard that advice pretty easily. Whereas you should generally take advice that offers new information or new perspectives, or clears up misconceptions, more seriously.

SPENCER: I see. So it's like a heuristic that we can use to help us think about, "Should we follow or ignore this kind of standard practice?" I've spent a lot of time looking at sayings and aphorisms because they seem sort of like universal advice. There's something about them that appeals to people: people like to share them, people like to read them, and many of them are actually in the form of advice. I've thought about, "What's going on there?" And it seems like — this isn't true of all of them, but for a lot of them — what they're trying to do is nudge you psychologically in a direction that tends to be beneficial to people. There's a lot of advice in there about not giving up, a lot of advice about pushing through fear, and a lot of advice about making sure to pay attention to the small things in life and really enjoying them. It seems like these are all common directional nudges that people could benefit from.

SATVIK: Yeah, absolutely. I can't remember where I heard it from, but somebody referred to the law of equal and opposite advice – for every piece of advice about not giving up, there's also a piece of advice about knowing when to change your mind or abandon sunk costs, or pieces of advice about pushing through fear. There are also pieces of advice about listening to your instincts and taking those seriously. I think these sorts of nudges are really common, and maybe what happens is, in a culture or subculture there's one type of nudge that's very broadly needed for most people, and then eventually it gets integrated, and maybe people even go too far in that direction, then sort of the opposite advice starts appearing in that culture.

SPENCER: It seems like personality is also a huge factor. Most people are probably more on the side of letting anxiety hold them back when things are not actually that dangerous, because in the modern world, we tend to have a lot of fear of things like public speaking, or asking for what we really want (which tends to be fairly harmless, but which we fear a lot). Whereas occasionally, there are people who are total thrill seekers and are actually doing all kinds of dangerous stuff, and anxiety really should stop them a lot more. But because they're a relatively small percentage of the population, the aphorisms that spread are mostly about pushing through fear.

SATVIK: Absolutely. I think that's a great point. It's probably the case that any individual person is weird or unusual in some way, such that some piece of commonly applied advice is bad for them. One of the things you can do, given your own personality, is try to figure out in what ways you are unusual, such that you should maybe take the opposite of the common advice.

SPENCER: Another topic I wanted to talk to you about — which actually ties in quite well with this idea of risk-driven development — is the idea of designing good feedback loops. Do you want to talk a little bit about how one designs a good feedback loop? What's the purpose, and so on?

SATVIK: In anything you do repeatedly, which is to say most types of work or social interactions, or studying, you're getting some amount of feedback, you're going through a loop where each time you try something, you learn something, and then maybe you do something a bit differently. I think the default feedback loops, in many cases, for many things that we care about, aren't very good. Either they don't tell you information at the right granularity, or there's a lot of noise in that information, or they're just very slow. If you can get feedback that's precise, fast, and accurate, that'll speed up your learning and your effectiveness in a lot of domains.

SPENCER: I think there's a bunch of domains where the feedback is so bad that people just make the same mistakes over and over again. One is in applying for jobs, where, first of all, a lot of times people don't get any feedback on why they didn't get accepted for the job. And then often, what I've seen is that when people reach out asking for feedback, a lot of times the feedback given to them is kind of BS. I think it's because the companies are erring on the side of not offending any candidates and not creating any legal liability, so they give you some wishy-washy bullshit, and you make the same mistakes over and over again. Another one is in dating, where I think a lot of times, when people go on dates and the other person doesn't want a second date, they really don't get information on why, because people feel awkward about communicating that information or worry the person will react badly. I actually once went, many years ago, to a speed dating event where the format was that after every speed date, each of the two parties fills out a sheet of information about their first impressions, what they thought of the other person, and why they would or wouldn't want to see the person again. At the end, you get this stack of all their thoughts in random order. You don't know who said what [laughs], and I thought that was really awesome. I think most people would actually not enjoy that, even though it probably would be good for them.

SATVIK: I absolutely agree with both of those examples, and I think with both of them, there are ways you can approach those things very differently in order to get better feedback. To take the job example, it took me six months out of college to find my first job. I applied and applied and I got a few interviews, and none of them went particularly well. I really had no idea what was going wrong. I couldn't get much feedback on it. So eventually, I started taking more of the networking approach, which a lot of people recommend, but I didn't really understand at the time why people recommend networking so much. The reason is precisely feedback. If you reach out to people, if you make a good first impression, then they'll be much more likely to speak to you. If they are impressed by you after a conversation, then they'll usually ask for your resume. In a 15-20 minute phone call, you're already getting real feedback on how you come across.

SPENCER: The feedback there is just the fact about whether they ask you for your resume after the short conversation. Is that the idea?

SATVIK: Absolutely. As opposed to with a job, you might find out a month later that you didn't get the job, or you might even just never hear back.

SPENCER: I see. Whereas here you're getting a faster loop of, "Oh, I had this quick conversation and they didn't seem that interested. Okay, maybe that 10 minutes didn't go as well as I thought." My favorite metaphor for this idea of feedback is to imagine you're trying to learn archery – you're firing arrows at a target, and that's how you're practicing. In one scenario, you fire an arrow and you see how close it is to the bullseye (it was to the left, it was to the right, it was too high). Then you fire the next arrow, and so on. Then imagine another scenario just like that, except now you're blindfolded, so every time you shoot an arrow, you don't get to see where it landed. You can imagine how it's essentially impossible to learn to be a good archer this way, because you can't tell how close you are to the bullseye, or whether you're missing to the left or the right. As soon as the feedback loop is broken between shooting the arrow and learning how good the shot was, and in what way it was wrong, then you just can't learn.

SATVIK: Absolutely. I think a feedback loop that was pretty broken for me for a long time and for a lot of people, that has to do with social skills in general, is reading body language. To take your dating example, if your only piece of feedback is whether the person wants a second date, then you're learning very, very slowly. Whereas if you can read body language better, if you can tell how they're reacting, whether they enjoy your jokes, whether something is a turn off, then you're learning much, much faster sort of what works and what doesn't.

SPENCER: That's an interesting point. It's sort of this virtuous cycle of if you can read body language well, then you are immediately learning how people are reacting to what you're saying or doing. That actually gives you more information that you can then learn even better how to socially interact. Whereas in the opposite scenario, where you're not good at reading body language, you might make a joke or say something or do something, the person reacts negatively, you don't pick up on it, and then you just kind of never learn to do it better. So you're kind of stuck in a rut.

SATVIK: One story I had that's kind of related is that I went through a couple of years where really my main focus was improving my social skills — at that time, they were just awful. I wanted to get them to at least adequate. I tried lots of things, I spent a lot of time reading books on social skills and so on, and practicing, and they got a little better, especially in some domains. But I think what really opened my eyes was when I went to a body language class. And as part of the class, we were given I think 10 scenarios, where we were supposed to read the body language and see how the person felt and give a true or false answer.

SPENCER: It was the person actually acting out in front of you?

SATVIK: No, no, we were watching videos.

SPENCER: Okay. Got it.

SATVIK: I remember, I got all 10 questions wrong — which was astonishing, obviously. It was much worse than chance and I was the worst in the class of 30 people.

SPENCER: Interesting.

SATVIK: I had the worst body language reading scores out of 30 people. That made me realize, "Well, that's probably the problem." I just honestly have no idea how people are reacting to the things I say. And then I really focused on that. I started to notice much faster improvement.

SPENCER: How did you actually improve it? Because I think that's one of the skills that a lot of people will think either comes naturally to you or you just kind of don't get it.

SATVIK: It's actually a little difficult, because if you don't have the feedback loop in the first place, how do you tell whether your attempts are working? What I did was, I actually watched a lot of videos of people flirting on YouTube, or just generally of people interacting, good interactions and bad ones. That worked to an extent, because I was being told, "Here's a social interaction that went well," and I could watch the body language and piece it together. In addition, I went to more classes, and I got some direct coaching, which was very helpful. I also did many of the standard things like reading books, trying to watch people, and trying to pick up the patterns. Once I had a better sense of what body language patterns were common, it was easier to start associating them with different meanings.

SPENCER: It's really interesting how people like yourself, who didn't naturally develop the ability to read body language, have to start with an intellectual understanding. Whereas many people, probably most people, just pick it up, and they never really think about it. If you ask them, "What does it mean when someone does such-and-such in this situation?" it's not even necessarily obvious to them what the answer would be, but they can just see it intuitively. Whereas you had to develop an explicit, reflective understanding of how it worked, and then over time, you had to convert that reflective understanding into an intuitive one that you can use in real time without thinking about it. Is that right?

SATVIK: Yeah, I'd say that's totally correct. I think there are many cases for me where I've had to try and really explicitly understand something, write it down — often things that might come naturally to other people, but by verbalizing it, by thinking about it, by writing it out, I was able to improve in those areas much faster.

SPENCER: Right. In those situations where you have to develop an explicit understanding — from my point of view — the problem is that even if you develop an explicit understanding, it doesn't necessarily mean that you can do it effectively in real life. A good example of this would be martial arts, where someone could theoretically perfectly understand how to do a great punch, but it doesn't mean they can do a great punch. It certainly doesn't mean they can do a great punch when someone else is attacking them and they have to do it within a quarter of a second or something like that. What do you think about taking that kind of explicit understanding and turning it into something that you're able to apply fast enough and smoothly enough that it really works in real life?

SATVIK: Certainly, the best solution is just to practice a lot. I don't think anybody can learn how to throw a punch without throwing a lot of punches. What also works surprisingly well, in a lot of cases, is just imagining the situation, especially if it's something that's relatively rare, and you kind of want to build in the habit. Say you want to build a habit of changing the topic if somebody doesn't like a joke of yours, or asking them a question. It's going to be hard to practice that enough in real life. You might get a couple opportunities a day, but you can just imagine a lot of different scenarios where you would do it, and trying to build that habit mentally through visualization is often pretty effective in real life as well.

SPENCER: It's really cool how Olympic athletes do just that kind of stuff as well. One of my favorite examples is an incredibly elite climber who set this goal for himself to complete incredibly difficult climbs on his first try, which is kind of unheard of. People can't even do these climbs on their 50th try, and he's trying to do them on his first. The way he does this is that, after having watched many, many videos of people climbing a route, he'll lie down on his back, and in his mind, he'll imagine doing each move one by one. His coach will actually place his hands in the position of where the climbing hold would be for each move, so he's kind of fake climbing on his coach's hands (it's totally bizarre), but he actually uses this to train to climb things he's never climbed in real life. I think a lot of people just don't get that you can create this sort of simulation of reality in your mind. While it's not a full data point, and may not be as good as actually doing the thing, maybe it's a quarter of a data point. And you can do it many, many more times, and you can make the scenarios as varied as you want and adjust different parameters of the situation really easily. I think that's a powerful and underused technique.

SATVIK: This technique was one thing that helped me with job interviews a lot. After I practiced networking for a while and built sort of a more explicit model of what worked well in interviews and what didn't, I took a day where basically, I sat down, and I closed my eyes, and I just imagined going through hundreds of common questions and responding with certain words and certain body language. I could kind of intuitively get a sense of how an interviewer would react to that, and I practiced those common questions again, and again, until I felt I had pretty good answers to all of them. And that made a huge difference. Literally, I went from having had almost no successful interviews to having something like 70% successful interviews after that.

SPENCER: Oh, wow. A technique I think I learned from Jeff Anders that I like a lot for this kind of thing, where you're in a stressful situation where you're gonna be asked questions, is to make a list of the questions you hope they don't ask, and then just practice the hell out of those questions. Those are the worst-case scenarios, the worst things for them to ask you. Once you've practiced those to the point where you have a really smooth response, then even if they ask the hardest stuff, you're really well prepared. And chances are, they probably won't even ask that stuff anyway. Not only is it good to not get caught off guard, but I think it actually builds confidence a lot.

SATVIK: I think there's definitely something to be said for building confidence by practicing the hardest scenarios, even scenarios that are unrealistically difficult. Certainly, in the case of interviews, I would just come up with the absolute worst questions and practice those, but you can imagine this in the case of dating as well. I thought about what I was insecure about relative to dating, then just imagined a bunch of dates where basically that was the first thing the person brought up. Maybe they were like, "Oh, you're a lot shorter than I thought you were." Going through a bunch of those scenarios just made everything a lot easier.

SPENCER: Let's break down feedback loops. You mentioned these kinds of different properties they can have. I think it'd be cool to go through each of the properties one by one that make for a better rather than worse feedback loop.

SATVIK: I think the first property — maybe the most obvious one — is the speed of the feedback loop.

SPENCER: So to return to the archery example, imagine that you're doing archery again, and you're trying to learn to be really good. But this time, instead of wearing a blindfold, when you shoot an arrow, you don't get to find out where it landed until tomorrow. You have to wait 24 hours to find out whether you were on target. Just think about how incredibly laborious it would be to learn to do archery well that way.

SATVIK: I'd make an even stronger claim, that a case where you had to wait five seconds for your arrow to hit the target is going to take longer for you to learn than a case where you only had to wait half a second.

SPENCER: It seems like it gets worse very steeply. Two minutes seems way, way worse than 20 seconds. And then an hour might be almost impossible for most people. I think when you're getting to the point where the feedback takes even an hour, you might have to take really careful measurements. You have to be like, "Okay, how exactly was I standing, where was my bow position?" and then cross-reference it against what happened an hour later or something like that. Whereas if it's within one second, the intuitive parts of your brain are able to automatically do the modeling and adjustment.

SATVIK: I think this is why coaches are so important for learning sports. Because at the very beginning, it's just going to take too long to get the feedback of whether what you're doing is working. Whereas if you want to get feedback on how you're standing before you even fire the arrow, that's the kind of thing a good coach can tell you.

SPENCER: They could just keep giving you this kind of micro-feedback all the time, be like, "Oh wait, your feet are in the wrong position. Okay, adjust that. Okay, good. Oh, wait, your hands are not in position." And it can be a little annoying [laughs], but if you're trying to get really good, the more pieces of feedback per minute the better.

SATVIK: I played tennis for a while, and I think one thing you notice after a bit of practice is that you get a pretty good sense right after you hit the ball of whether it's going to go in or not. At the very start, you don't really know until you see the ball land. But pretty quickly, you develop this intuition, and you get much more instantaneous feedback.

SPENCER: That's cool. When you first start playing tennis, you actually have to wait 10 or 15 seconds to see what happened with the ball. Whereas when you get really good, you know the moment you hit it, and you're just like, "Oh, that's out."

SATVIK: I didn't get to a very high level, just a few years in school. But yeah, you can develop that sort of faster feedback intuition very, very quickly.

SPENCER: Cool. So what's the second trait of building a good feedback loop?

SATVIK: The second trait is how granular it is. This ties back into the archery example. If all you know is whether you're hitting or not, that's useful. But it's even more useful to know if your stance was off, if your foot was in the wrong place, if your arm was drawn too far back. Getting more precise feedback on all those things is even more helpful.

SPENCER: That makes sense. I mean, even just knowing where on the target you hit is a lot better than just knowing, "Did you hit the target?" So you go from that binary to an extra layer of information.

SATVIK: Yes. And oftentimes, things that improve granularity also improve speed. For example, learning to read body language doesn't just give you a much better sense of which pieces of what you're saying are effective; it also gives you much faster feedback.

SPENCER: That makes a lot of sense. What's the third trait?

SATVIK: The third trait is noise. This is essentially how random your feedback loop is: whether it frequently gives you false negatives or false positives about whether what you're doing is correct.

SPENCER: I like to think of it, in the archery example, as wind. Imagine there's a really strong wind that's constantly changing directions while you're doing archery. It's like, "Well, you might have missed the target, but maybe actually you had a great shot, and the wind just suddenly came in and knocked it off course." So should you actually adjust your stance or not? That's hard to tell if it's really windy.

SATVIK: Yeah. In the case of giving talks, I actually had a talk I gave in three different situations, to three different audiences (who were basically all rationalists), just in different cities, so relatively similar audiences. As far as I can tell, I did mostly the same things in each talk. One of those was received kind of poorly – no real interest, no real energy in the room. The other two were received pretty well. That's an example of something where you just have to roll the dice a few times to see which way it's going.

SPENCER: This is one of the big problems with stock market investing as well, which is that people think, "Oh, I made three trades this year. They all went up, and therefore I'm a genius." And it's like, "Well, the stock market is so noisy, and not only that (this is kind of a more subtle point), but could there be a correlation between the data points?" For example, in your case, you were talking about whether a given talk was good, but imagine you're trying to figure out how good a public speaker you are more broadly. Well, if all you have is data from one talk you gave three times, you can't really separate it out: is it the talk (like that specific talk), or is it your public speaking ability? In a sense, those three data points are correlated by virtue of being the same content. And in the stock market, not only is there extreme noise, but there's high correlation. If you were an internet tech stock investor in the late 1990s, you probably would have done incredibly well no matter what you bought, because the internet stocks just went up and up and up. Similarly, if you were an internet tech stock investor in 2000, you probably would have done really badly. It's all this correlation between them that makes it even trickier to figure out if you have any skill or not.

SATVIK: Investing is something I think about a lot for my work. In my case, I'm happy with an investing strategy that's right even 50.1% of the time. It's actually really hard to tell the difference between a strategy that's right 51% of the time and 49% of the time.

SPENCER: You have to have a huge number of trades made, right?

SATVIK: Exactly. And one of the interesting things with noisy feedback is that even if, in theory, it's fine to have something that succeeds only occasionally or rarely, in practice it can be very difficult to identify those low-success-rate things without huge datasets or some sort of theory behind them.
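
That difficulty is easy to see with a quick simulation (a hypothetical sketch with made-up numbers, not anything from an actual fund): how often does a strategy's short track record make it *look* like a winner?

```python
import random

def looks_like_a_winner(win_rate, n_trades, n_runs=2_000):
    """Fraction of simulated track records (n_trades each) in which a
    strategy with the given true win rate ends up with more wins than
    losses -- i.e., how often it *looks* skilled."""
    count = 0
    for _ in range(n_runs):
        wins = sum(random.random() < win_rate for _ in range(n_trades))
        if wins > n_trades - wins:
            count += 1
    return count / n_runs

# Over 100 trades, a strategy that actually loses slightly (49%) still
# shows a winning record a large fraction of the time, and the winning
# strategy (51%) only looks slightly better -- the records overlap heavily.
print(looks_like_a_winner(0.49, 100))
print(looks_like_a_winner(0.51, 100))
```

With only 100 trades the two cases are nearly indistinguishable; only with thousands of trades do the track records reliably separate.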

SPENCER: Given the frailties of the human mind, when we're in these highly noisy situations, we also tend to see patterns where there are none. If you're trading in the stock market, and there's a 50:50 chance of making money or losing money on each trade, it's very easy to start noticing patterns, like, "Oh, when I bought these four stocks in this way, they went up, and when I bought these three in this other way, they went down. So maybe I'm good at this and not good at that." And it's like, "Okay, but do you really have enough data to tell?"

SATVIK: And this relates to your point before about correlation. It's easy to make a trade where, say, you buy a very high-variance stock that moves a lot, and you short (sell) a lower-variance stock, and then you make money on that trade because the stock market in general went up. That might seem like a successful trade (it might actually be a successful trade in those conditions), but as soon as you reach a bear market, where stocks are generally going down, that sort of trade will consistently lose money instead.
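
That kind of trade can be reduced to a toy model using beta, a stock's sensitivity to the overall market (the betas and returns here are made up purely for illustration):

```python
def pair_trade_return(market_return, beta_long=2.0, beta_short=0.5):
    """Toy model: buy a high-beta stock and short a low-beta stock,
    approximating each stock's return as beta * market_return."""
    long_leg = beta_long * market_return      # gain on the stock you bought
    short_leg = -beta_short * market_return   # gain on the stock you shorted
    return long_leg + short_leg

# The trade makes money whenever the market rises...
print(round(pair_trade_return(0.10), 4))   # 10% market gain -> +15%
# ...and loses the same way whenever the market falls.
print(round(pair_trade_return(-0.10), 4))  # 10% market drop -> -15%
```

In a long bull market this looks like consistent skill, even though it is really just a leveraged bet that the market keeps going up.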

SPENCER: That's a good point. So you're basically saying, if you buy volatility (you're talking about beta, really), the stocks that tend to move up a lot when the market goes up, versus the ones that tend to move up less with the market, right?


SPENCER: So you're saying, if you bet in favor of the high-beta stock and against the low-beta stock, you're going to keep thinking you're a genius as long as the stock market's going up, but you may not realize that what's secretly happening is that it's just a trade that works when the market's going up. Then once the market flips around, you're going to be screwed. If you take a larger step backwards and look at the whole stock market, you have all these different traders doing strategies, and some look like geniuses because they happen to be doing a kind of trade that works really well in this kind of market; when the market flips, they might do really badly. But for, let's say, five years, they just seem like geniuses, and everyone wants to invest with them.

SATVIK: That's definitely one of the scary things about the stock market: just how long these sorts of conditions can go on.

SPENCER: Right. Even when something seems totally irrational in the markets, the problem that all the super smart investors have is, they're like, "Yeah, but I've seen situations where the market did crazy things for 10 years, and I can't sustain a trade that's against the market for 10 years."

SATVIK: You could argue that we're in one of those regimes right now, where the market has been in a fairly specific type of behavior since 2009.

SPENCER: Which behavior is that?

SATVIK: The market has been pretty consistently going up with historically very low volatility.

SPENCER: [laughs] Lots of people have said, "Oh, well, you know, we're due for a crash, things got to go really badly." And then, it just keeps going up, other than, of course, the temporary crash for the pandemic, but that was fairly short lived.

SATVIK: It's easy for me to imagine being a hedge fund analyst who graduated in 2009 and has just been learning in this environment. Maybe this environment will continue for another 10 years, and that's great. But maybe things will mean-revert, and then suddenly 10 years of experience are invalidated, and you have to unlearn all your habits.

SPENCER: There's a really interesting phenomenon in the markets where you can have a market condition that lasts so long that a lot of the people in the industry have only ever seen a market with that condition, and instead of thinking of it as a market condition, they think of it as, "Oh, this is the way the market is, the nature of the market." We saw this with housing prices: they went up and up and up for so long that people just sort of forgot that that's not always true. They thought, "Oh, that is the nature of housing. It just goes up in price." Some people say that we're in a similar situation with bonds, where people are so used to interest rates being low that they just think, "Oh, yeah, this is just the way things are. Interest rates are low." And often that can lead to devastating consequences. The first time you see something like this happen, maybe you're very aware of it. But after it's been happening for 10 or 20 years, it's like the water you're swimming in, and you stop even considering how it would affect your trades if it were to change. Then you can get massive problems, like the huge financial collapse when the mortgage industry went down.


SATVIK: It's funny — the idea you mentioned of the water you're swimming in also applies the other way, with problems that people come to accept after they've been around for long periods of time. I remember at one job, we had retrospectives every week, where we were supposed to bring up long-running problems that we'd like to solve, and we pretty much just forgot all of them within a week or two. They just became part of our environment, and we would only notice them again when we hired somebody new, and they started noticing all these problems.

SPENCER: I think this happens in our personal lives as well, where if we've lived with some kind of problem for, let's say, five years, we get so used to it. We stop really thinking about it or processing it. I think that can be one of the benefits sometimes of a third party or friend or a coach or a therapist, which is just kind of being like, "Hey, you know, the way that you and your wife fight every day? Like, that's not ideal. Maybe we should work on that." One question I think that can be really useful to people is to say, "Okay, what are your top three biggest problems right now in your life? Write them down. And are you sure that you shouldn't be working on them right now because it seems like you're not putting much time into them, even though they are the things you think are your biggest problems?"

SATVIK: I have a very direct example of this. As a child, I had really severe asthma that greatly restricted my activities. I couldn't go to a lot of places, and I needed to carry around a giant nebulizer everywhere I went, just in case something went wrong. As I grew up, I mostly got over it, and so I thought of myself as somebody who had childhood asthma but got out of it. Actually, when I got married — when I met Mandy — she pointed out that I have a lot of breathing problems and convinced me to see a doctor, who said, "You have asthma, and you should be treating it." Even though it's not nearly as bad as my childhood asthma was, treating asthma now has significantly improved my breathing and the quality of my life.

SPENCER: So you just got used to breathing badly and just thought, "Oh, this is just normal. This is how I breathe."

SATVIK: I mean, I was breathing better than I ever was, because my asthma got better over time. It went from severe asthma to moderate asthma. And so I thought, well, moderate is great now.

SPENCER: Interesting. That's an example where you sort of didn't know anything else, so the improvement looked like, "Oh, I'm doing great now. Why would I treat it?" I think a lot of times it goes the other direction, where it's a creeping badness: things get slowly worse and worse, and now they're at some bad place, but you're so used to it that you're not actively thinking about it anymore. Like, a relationship slowly falls apart, or in some cases it's even physical — like someone's house slowly falling apart [laughs] — but it's so slow that they never really pay that much attention to it.

SATVIK: Right, or another very common case is if your house is slowly gathering mold.

SPENCER: Yes, mold is a scary one. So speaking of problems in life, another thing I want to ask you about is two different major approaches to solving problems: forward chaining and backward chaining. Do you want to tell us a bit about that?

SATVIK: My background is in math, and I am very used to studying something deeply, really understanding it, and then solving a problem using that understanding. When I came to the world of software engineering, that didn't work anymore. I didn't have the time to study everything deeply, and I had to learn to use a lot of things that I didn't understand. That got me thinking about a different mode of thinking. What I noticed was that I had a really hard time just picking up a new tool and building something with it, but a lot of other people had an easy time with it. And conversely, there were things that I could do very well, like predicting that something I understood deeply would perform a certain way, or that it could be used to solve a large class of problems. So I studied what those people were doing; I asked if I could watch them. By far the most common method people would apply was to go on Google, search for an example of something that kind of did what they wanted to do, and then take that example and start modifying it. That's backward chaining from an example, where you take something concrete, and instead of trying to understand every bit of what makes it what it is, you start poking around with it and seeing the empirical behavior it has.

SPENCER: What makes it backward chaining? Is it that you're starting with something close to what you want, and then working backwards, adapting it to what you need? Is that the idea?

SATVIK: Yes, you're starting with something concrete that's just not quite what you want and then taking a step back, in order to give yourself the freedom to make it what you want.

SPENCER: I guess I think of another type of backward chaining when it comes to, say, a life goal, where backward chaining is like, "Okay, think about the place you're trying to get to. When I'm 50 years old, I want to have two children, I want to live in a house that I own, etc." Then you work backwards from that imagined end state to where you are now, to make your life plan. Is that the same idea, or is that a different idea?

SATVIK: It's absolutely the same idea. The reason I think of them as so similar is because, say you have your ideal end state as point A, and then you have your achievable thing — maybe it's an example of software that's already created, or maybe it's a job you know how to get that's not the ideal job you want — and that's point B. What you want to do is move backwards from both of them to find a common ancestor — that decision point that separates point A from point B — and then you can move from point B to that common ancestor, and then make your move towards the goal you actually want.

SPENCER: Interesting. Can you maybe give a concrete example of that?

SATVIK: Sure. In software, one thing I was trying to do at one point was to see if, essentially, I could take a Python script and run a slightly modified version of it on hundreds of different machines (this was something I had no idea how to do). I googled around a bit and found a product, Amazon Web Services Batch, which offered to do this and had a specific example in a tutorial. Now, my previous approach of forward chaining all the time would have been to try to really study exactly how Batch works and develop my configuration, my files, my logic from the ground up. Instead, I took a specific Batch project, made sure it worked, and then deleted all of the stuff that was specific to it and different from what I needed, so that I could start inserting the bits and pieces that I actually wanted. That point of deleting all the specifics is finding the common parent.

SPENCER: Nice, that makes sense. I think someone could be left with the impression that forward chaining is kind of worse, but it seems there are definitely cases where forward chaining is better than backward chaining. Do you want to talk about that?

SATVIK: I may have given that impression, because I'm so used to forward chaining, but forward chaining is often better for coming up with new ideas, or for cases where you want to accomplish a lot of things at once. My example of coming up with risk-driven development, taking a data engineering project and identifying that the most likely risk was actually the libraries and not the user interface, is forward chaining.

SPENCER: Got it. Maybe you could talk us through how you do forward chaining, or what a good procedure for it looks like? Maybe walk us through the process.

SATVIK: One of my goals when I moved to Boston — and I was single and living by myself — was that I wanted to make friends. I looked back through my life: "Well, what had built successful friendships?" The main ingredient that was really missing from my current social interactions was just quantity of time. I wasn't spending enough time around the same people repeatedly to have the chance to build up friendships. So I thought, "Well, what's something that would let me do that?" And my answer was to start hosting a weekly board game night at my house.

SPENCER: Got it. The way you apply forward chaining, in this case, is you look at the situation, identify what needs to happen or what needs to be different, and then build a plan from the ground up. Whereas, if you're backward chaining, maybe you do something like, "Oh, let me find a friend who seems to have a good social life. What are they doing? Okay, can I just tweak their solution to this problem and make it my own?" Is that right?

SATVIK: Exactly. That's a perfect example of the difference.

SPENCER: Cool. I think startups are a really interesting example of the power of both forward and backward chaining. If you think about the lean startup methodology for building a startup, it says, "Pick some problem in your own life or that you witness right in front of you, start building a prototype of some product or service that could help solve that problem, and get it out in the world right away." So you're looking at what's in front of you, you're immediately trying to apply a solution to it, and then you're iterating on it: "Oh, people didn't like the user interface; I'm going to tweak that," and "Oh, people said that was useful in this way but not that way, so I'm going to follow that gradient." How would you say that fits into this forward-backward chaining framework?

SATVIK: I think the reality is, you're never just forward chaining, and you're never just backward chaining. What you have is this connected graph of ideas, some of which you're pretty certain of and have a good mental model of, and some of which you don't really understand at all. The way I think about it, in the context of a startup, is that I'm trying to find the points on the graph where we need to learn the most. In some cases, that's learning about what the customers want, as we've discussed. In some cases, that's learning about the technology — like curing all cancer (we know customers want that). In a software startup, you'll often say, "Okay, I think the thing that will give us the most information about whether customers want this is actually making the user interface faster. In our last trial with customers, they found that it was really slow, and all of their complaints were about the user interface, so we didn't even really get a sense of whether they care about the problem our product is trying to solve." So that's one point. Then you want to ask, "How can we make it faster?" You can either forward chain from your understanding of your codebase and say, "Look, I'm pretty sure these specific things are the bottlenecks in our code, and we should fix them," or take a more empirical approach: start profiling your code in those use cases and work backwards from there.

SPENCER: So profiling would be running software that analyzes how much time each part of your code takes to run, right?
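
In Python, for example, a minimal profiling session with the standard library looks something like this (a generic illustration, not the specific tooling discussed in the episode):

```python
import cProfile
import io
import pstats

def build_report(n):
    # Deliberately wasteful work for the profiler to measure:
    # string concatenation in a loop.
    report = ""
    for i in range(n):
        report += f"row {i}\n"
    return report

profiler = cProfile.Profile()
profiler.enable()
build_report(10_000)
profiler.disable()

# Print the five most expensive calls, sorted by cumulative time.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
print(out.getvalue())
```

The report shows where time actually went, which is the empirical starting point for backward chaining toward a fix.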


SPENCER: I feel like this topic is connected to another topic, and I'm still having trouble mapping them together. That other topic is local optimization versus global optimization. The way I think about this is to imagine that you're climbing a series of mountain ranges, and your goal is to get as high up as possible. Local optimization is saying, "Okay, look around where I'm standing right now. What one step could I take that would get me up as much as possible? Let me take that step, and then I can repeat the process: look around again and ask what one step I can now take that will get me up the hill as fast as possible." If you keep repeating this procedure — this gradient ascent algorithm, as it's called — you'll eventually get to the top of the current hill that you're standing on. But you won't necessarily get to the highest peak, which might require a global optimization approach, where you have to think, "Oh, what is the map of all the different peaks? I might have to actually walk down for a while to get to a different hill before I can start climbing." So local optimization is just putting one foot in front of the other to go up as fast as possible right now; global optimization is looking at the whole thing from a bird's-eye view and trying to figure out where you're trying to get to. Do you see a connection between those ideas and the forward and backward chaining thing?
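
The gradient-ascent picture Spencer describes can be sketched in a few lines (a toy one-dimensional landscape, made up purely for illustration):

```python
from math import exp

def f(x):
    # Two hills: a small peak at x = 0 (height ~1) and a taller
    # peak at x = 4 (height ~2).
    return exp(-x ** 2) + 2 * exp(-(x - 4) ** 2)

def hill_climb(f, x, step=0.1, max_iters=1000):
    """Greedy local search: keep taking whichever small step raises f,
    and stop as soon as neither neighboring step goes higher."""
    for _ in range(max_iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            return x  # at a local peak
        x = best
    return x

# Starting near the small hill, the climber tops out around x = 0 and
# never discovers the taller peak at x = 4.
print(hill_climb(f, x=-1.0))
```

Starting the same climber at x = 3 instead finds the taller peak, which is exactly the sense in which local optimization depends on where you begin.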

SATVIK: I would certainly say that backward chaining is closer to local optimization, and forward chaining is closer to global optimization. Usually, when you're backward chaining, you're taking a specific start point. Whereas forward chaining, you're taking more of a blank slate and applying any technique you have to sort of search a wider landscape.

SPENCER: One thing that comes up in local versus global optimization is that local optimization is almost always easier, because it's almost always easier to just look around where you are right now and ask, "What little step can I take to make things better, to get up higher?" Whereas global optimization requires thinking about the whole space and making a plan. I wonder if that applies to forward and backward chaining too: maybe backward chaining is often easier, but has the danger that you're limited to whatever good example you can find, whatever thing you can start from and then modify. Forward chaining may be more challenging on average, but maybe it can solve a broader range of problems. What do you think about that?

SATVIK: I think that's right. Backward chaining is almost always easier and faster, and almost always comes up with a solution that's good enough, but not necessarily the best. An example: the other day, I was trying to make a moderately complex plot to represent some data that I had, and I don't know Excel particularly well. But I was able to search, find a plot that was kind of similar to what I wanted, and tweak it around until it was good enough. It wasn't quite perfect: the colors were a bit off, the text was a bit too small. But it was good enough for my use case. Whereas, if I had spent some time studying Excel deeply, I might have been able to make something better, but it would have been much more of an investment.

SPENCER: That makes sense. It's funny you bring up plotting, because I find very often, when you're plotting information, the perfect way to plot it is actually quite difficult, and it's not necessarily any of the standard plot types. If you go through the standard options ("Do I want a bar chart or a scatter plot?"), what you actually need is often none of those. But it's a ton of work to design the right plot, so most of the time you're better off just looking at the standard types and picking whichever is closest to what you're trying to show.

SATVIK: That's actually something I ran into in my product management period. I realized that when you're presenting information, what you really care about is communicating the unusual parts — the parts that people aren't likely to already understand. The challenge with using any out-of-the-box plots and visualizations for that is that those are, almost by definition, designed to communicate the usual things. I ran into this issue quite frequently: if I was trying to explain information in the best way, I would have to spend a lot of time designing something essentially brand new.

SPENCER: That's really interesting. I have invested so many hours in trying to design the perfect plot for different information. One thing you'll find if you look at infographics that look really cool (where there's some really interesting visualization of some data) is that, very often, the most visually interesting ones don't actually present the information very well. They just look really cool. If you ask, "Is this actually highlighting the parts of the information that are most important?", it's often not the case.

SATVIK: It actually brings up an interesting incentive problem. Do you present the plots that people are going to like the most? Or do you present the ones that are going to improve their understanding the most? Those two aren't necessarily the same.

SPENCER: A great example of this is people often use pie charts because they kind of look slicker and cooler. Whereas, very often, bar charts are actually a much better representation of the same information, but they look really boring.

SATVIK: Yeah, internally in our company, we actually use literally the exact same type of line chart for almost everything, and we deliberately made it kind of ugly, just so that people don't spend too much time designing these visualizations.


SPENCER: The last topic I wanted to bring up with you is the idea of centralization versus decentralization and sort of what the advantages and disadvantages of each are, and when we should use each.

SATVIK: What brought this to mind was two companies I worked at. At one company, everything was way too decentralized: they supposedly had a software product, but they did a lot of custom consulting for each of their customers, and everything was totally different. And at another company, things were way too centralized: they had a single support team despite having many different customers with many different needs, who had all initially interacted with different salespeople or engineers in the company when they joined. The very centralized support team couldn't effectively handle all the customers. The twist, of course, is that these were the same company. They were overly centralized in one way and overly decentralized in another. A lot of this was very ideological: rather than discussing what to centralize and where, people were mostly arguing about whether the company as a whole should be more or less decentralized.

SPENCER: I see. So instead of thinking of it as a flexible strategy, where, "Oh, we could be more centralized in this dimension and less centralized in that dimension," they were thinking of it as, "The whole company has to have a certain level of decentralization," even though that wasn't actually true at the time.

SATVIK: Right. It got me thinking: very clearly, what we wanted in this case was either for customer requests to be more centralized and less customized, or for the support team to be more decentralized, so that they could handle the variation. I've seen this pattern quite a bit, especially with people in startups who have been at very restrictive employers that were excessively centralized in many ways; they just want to decentralize absolutely everything.

SPENCER: It's sort of reactive: they've been hampered by bureaucracy, so they have a negative reaction to everything associated with it.

SATVIK: Exactly. There's a book by Alfred Sloan, who was the head of General Motors during the time when it went from being basically a tiny company to a major one. It's called "My Years with General Motors," and he has a very nuanced take on centralization and decentralization. He talks about how, when you want to be responsive to a market, when you want a lot of customization, when you don't know what customers want, that's when decentralization is helpful. But when you want to control costs, when you want to do the same things over and over efficiently, that's when centralization is helpful. He would actually take General Motors through roughly five-year cycles of decentralizing a lot and then re-centralizing a lot.

SPENCER: Because the idea is when you decentralize, you allow information to flow in from all the different parts of the organization like, "Well, maybe the people in Kentucky know more about the needs of customers in Kentucky" or something like that. And then when you go through the kind of centralization phase, then you're integrating all those learnings in some way.

SATVIK: Yes. It's not just that a decentralized team knows more about Kentucky to begin with; if Kentucky is very different from Ohio, then the team situated there can also learn about their Kentucky customers much faster. In contrast, if you have five different teams in the Midwest and your customers have a lot of commonalities, odds are that each of those teams is going to be reinventing the wheel a lot. So the ideal is to decentralize in order to learn about the true differences between customers, and centralize the parts where they're mostly the same or where the differences don't really matter.

SPENCER: It seems then that you want to use more decentralized solutions when different groups or different areas or regions or whatever actually need different strategies, but then you want to centralize the commonality of the pieces that they all use in order to improve standardization and maybe even incorporate the learnings from all the different groups where they overlap.

SATVIK: Yeah. One example from Twitter: originally, they started out with an extremely decentralized organization, where each software team was building its own tiny product. In addition, each team was also doing a lot of common engineering work, things like setting up a code repository or a test framework, where every team was more or less doing the same thing, perhaps in slightly different ways, but where those differences didn't matter. They actually went through a period of centralizing too much first: they tried to centralize all the teams and suffered from that. The end solution was to decentralize the customer-facing parts and centralize a lot of the infrastructure.

SPENCER: So you said to me that centralization provides leverage and decentralization provides responsiveness. Do you want to unpack that a bit?

SATVIK: By leverage, I mean a few things. If you centralize something, you can reduce costs, and you can often make a better solution apply more broadly. Let's say you have a range of solutions, some good, some bad. If you centralize, you can often get the 80th- or 90th-percentile solution spread everywhere. You might not be able to get everybody using the absolute best thing, but you can get everybody using a pretty good thing.
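
That percentile point can be illustrated with a tiny simulation (hypothetical setup: each team's solution quality is drawn uniformly at random between 0 and 1):

```python
import random

# 1,000 teams each independently land on a solution of random quality
# between 0 (useless) and 1 (ideal).
team_quality = [random.random() for _ in range(1_000)]

# Decentralized: every team keeps its own solution, so average quality
# is just the average of what everyone happened to build.
decentralized_avg = sum(team_quality) / len(team_quality)

# Centralized: identify roughly the 80th-percentile solution and roll
# it out to everyone.
standard = sorted(team_quality)[int(0.8 * len(team_quality))]
centralized_avg = standard  # every team now uses the same solution

print(f"decentralized average quality: {decentralized_avg:.2f}")  # ~0.50
print(f"centralized average quality:   {centralized_avg:.2f}")    # ~0.80
```

Nobody gets the single best solution under centralization, but everyone gets a pretty good one, which is the leverage Satvik describes.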

SPENCER: That makes a lot of sense.

SATVIK: On the other hand, the downside of centralization is you have fewer brains — you have fewer people making the decisions. They might be able to develop more expertise, but they can't take in as much information. If you want to respond to customers, or if you want to open something in a new market, if you want code that people can experiment with a lot, those are all cases where you want to decentralize.

SPENCER: It seems also, with decentralization, you can have a strategy where each group does its own thing; some might do badly, but some might do well, and you can actually capitalize on the variation. You can say, "Ah, the ones doing well, we can give them more resources, or maybe we can copy some of the good things they're doing. The ones doing badly, we can give fewer resources, or shut down those regions." Whereas with centralization, you're putting all your eggs in one basket, in a sense.

SATVIK: Yes, and this is actually a major point of concern for a lot of companies, such as hedge funds. A lot of hedge funds are structured essentially as collections of independent small companies, where each team has some amount of money and is essentially responsible for its own trading and its own profits. There are some companies, like Renaissance or AQR (mostly quant funds), that are structured more as one team: people are more specialized and have roles within the team. As a result, it's harder to directly measure how well any group of five people is doing.

SPENCER: But then they can benefit from, say, having risk controls across all the strategies simultaneously, all the different teams benefiting from each other's work, rather than a competitive atmosphere where each team is trying to do better than the others.

SATVIK: Right. This is also a difference, I believe, between Google and Amazon. Amazon has a much more decentralized, many-small-teams structure, where it's also easier to measure the value of each team, whereas companies like Google tend toward more centralized, more specialized responsibilities.

SPENCER: Yes, you're talking about core Google, rather than Alphabet, which is now split into all these different pieces?

SATVIK: Yes. Which is, in itself, an example of decentralizing.

SPENCER: Right. Exactly. Let's talk about this at a bigger scale, like at the level of society or a country, how would you think about applying it there?

SATVIK: One thing I find really interesting is to compare laws between a place like Japan, which has a pretty strong national government and relatively little local variation in laws, and the United States, where the different states are really different. One thing that I think the US actually does pretty poorly as a result is being business-friendly. In many ways, the US is less business-friendly because of this fragmentation.

SPENCER: I see, because you have to be incorporated in one particular state, and then if you want to hire employees in a different state, you have to register with the state government, all this kind of thing.

SATVIK: And there are many cases where you have many versions of local laws and compliance that are just slightly different in each state, so you have to prove this compliance 50 times.

SPENCER: That's interesting, because the US overall has a reputation for being very business-friendly (when you aggregate all these things), but I think you're absolutely right that having all these different states does make the law more complicated. For example, say you're trying to create a company that does digital therapy, where therapists do Zoom calls with patients. It turns out the licensing requirements differ in every state, so you can't have a therapist in one state doing a Zoom call with a person in another state, because of all those different state laws.

SATVIK: Yeah. I think — what's the term for it? — occupational licensing is a major problem in the US; it applies in many, many fields. For example, my wife's a teacher. It would be not impossible, but a significant amount of work, for her to get a teaching license in another state.

SPENCER: It seems a little myopic, because those kinds of rules benefit teachers in particular states, since it's harder for teachers from elsewhere to take their jobs. At the same time, they screw over those same teachers, because if they want to move to a new state, there are all these barriers thrown up in front of them. And it's hard to argue that the states' rules really are so different that they shouldn't just accept each other's licenses as good enough. If someone's a social worker in one state, really, we're not gonna let them practice social work in an adjacent state? It seems pretty weird.

SATVIK: Another domain where this is coming up recently is privacy and data compliance. For example, you have one state where the requirement might be that a company either encrypts its data with one method or gets rid of it within 30 days. I think we actually briefly had a situation where another state required you to encrypt with a different method and these were incompatible. Thankfully, I believe that specific situation has been resolved. But there wasn't even any sort of potential malice, I think — just two different states were trying to accomplish the same thing in slightly different ways and ended up with an impossible situation as a result.

SPENCER: Stepping back and thinking about the country as a whole again, and this idea of centralization and decentralization: usually people talk about this as a spectrum and ask where we want to fall on it. Libertarians are like, "Oh, decentralize everything." And maybe a lot of liberals are like, "No, we want more centralization." But taking the idea you discussed earlier in the conversation, it really seems like we could decentralize parts of society and centralize others, and that might actually have more benefits. So thinking in that way, what are some things you think we might want to centralize versus decentralize?

SATVIK: Occupational licensing and a lot of business regulations, I think, are things I would want to see more centralized. One interesting place where I would like to see less centralization is drug approvals. There are a lot of people with very different opinions on what the right amount of safety is for a regulatory agency, and what the right amount of proof is to demand. I think that's a case where having more models, maybe having different states experiment and directly see the results of, say, laxer versus stricter drug approvals, would be very useful.

SPENCER: You can see the argument for centralizing that, because maybe deciding how safe a drug is, is just such a difficult task that it's hard for every state to do it. On the other hand, you could imagine a situation where there's still only one FDA (so that's centralized), but different states can decide how they want to handle the FDA's recommendations. Maybe one state could say, "You just can't get anything not FDA-approved." Another state could say, "If it's not FDA-approved, you can't buy it yourself, but a doctor can still give it to you." And a third state could say, "If it's not FDA-approved, you can actually buy it yourself, but it's going to have a huge label on it saying this might kill you, use at your own risk." Those three models could all compete, and maybe that would give us more information about how well each one works.

SATVIK: Yeah, I think that would be very effective.

SPENCER: It is an interesting idea that, having so many different states, we have all these experimental test beds for running trials in society. I wish we experimented with it more. It would be super interesting if some rules that get rolled out to states were actually rolled out in a randomized order, so that every month a new state adopted them or something. It would be almost a randomized controlled trial; you could actually see how the effects propagate as the new rules rolled out.

SATVIK: It is interesting, and you definitely see parts of that just through state culture. For example, with the pandemic, different states had significantly different rates of mask-wearing. I think we saw some interesting variation there in how well different policies worked.

SPENCER: Unfortunately, it can be a fairly costly way to get information. But I do wonder to what extent this kind of state model has been part of the reason the US has been so successful economically. It provides a sort of way to decentralize experimentation, and that's actually really powerful.

SATVIK: I think one of the interesting things about states is that a lot of states and cities have offered specific incentives for certain types of businesses to move in at different times. For example, I remember around 2010, Washington DC had a big program for startups and tech businesses, and Philadelphia did something similar more recently. I think places like Kansas, which managed to get much faster internet than most of the US, have also been offering incentives for people to move there.

SPENCER: It seems, though, like it could also create a weird zero-sum competition, where states are battling each other over who can give the biggest tax break. There's a zero-sum element, because, well, that activity has to happen somewhere.

SATVIK: I think it's actually very interesting to try and figure out whether that's positive or negative; I've thought about it a bit, and I'm genuinely unsure. One of the benefits of states and cities battling it out with tax breaks and incentives is that, assuming the states know what they're doing, businesses are more likely to end up in the places where they're the most productive. It might be helpful to have a bit more of a centralized framework for that: say, each state submits an estimate of how much additional value it thinks a business will bring, and then offers a tax break as a percentage of that value, where the percentage is fixed across all states. For that sort of incentive, it's not actually clear to me that it's a net negative.

SPENCER: Interesting. The assumption there is that it's not zero sum: where these companies go, where this activity is done, can actually produce different amounts of value depending on the state. Therefore, this is an information-gathering exercise, where the state willing to bid the most is the state that will probably benefit the most. It's a way of allocating resources efficiently.

SATVIK: Exactly. And you certainly see this on smaller levels: if somebody is interested in software, there are some states that are much better for their career and some that are much worse.

SPENCER: Pulling back to the country level: if we think about what you were saying before, applying centralization and decentralization strategically, this time to a whole country, then on the one hand, we can centralize things that we know are good ideas (maybe we learned them through experimentation). Once we've figured out they're good, we can centralize them and apply those best practices to the whole country. On the other hand, maybe we can decentralize things where we don't know the right answer, so we want to let lots of different strategies play out to learn more and see which ones work. So one way of thinking about when to decentralize versus centralize at a country level is: how confident are you in what the right answer is? Do you want to try a lot of things, or do you want to apply best practice? And then another factor is: how much information does the top-level decision maker have versus the person on the ground? If the people running the country actually have far more information about a topic than the people on the ground, then you might want to centralize it. But imagine a situation where only the company operating locally knows what the right thing to do is; then you'd want to decentralize. Those are two big factors that can help you decide when to centralize or decentralize. Do any more of those come to mind?

SATVIK: One more factor that might make you want to decentralize over time is when something becomes a lot easier or cheaper. A modern example would be information gathering. Right now, it's actually relatively difficult for states to run many kinds of surveys; a lot of them have to go through a census board or various regulations. It would be a good idea, for example, to give states more authority to conduct their own surveys and censuses.

SPENCER: So the idea is that back in the day, when it was so hard to do a census, it made sense to centralize it; but now that it's relatively easy to collect this kind of information, we should let the states handle their own.

SATVIK: Exactly.

SPENCER: Well, that seems to suggest a fourth kind of axis to decide this, which is how much standardization you want. On the one hand, you might not need standardization at all, and then a kind of decentralized situation can work well; on the other hand, you might want a lot of standardization. If you're doing a full US census and the questions are asked differently in each state, that's going to make the information very hard to compare. That seems like another factor in whether you'd prefer centralization or decentralization.

SATVIK: In this case, I was thinking more about decentralizing the execution. But I would say, even pretty generally, you often want to centralize protocols, a broad common set of things, and then leave some room for decentralized variation.

SPENCER: It's interesting to me, because a lot of times this debate over centralization versus decentralization almost takes on a moral element. The decentralized view says, "Well, people should be able to make their own decisions; states should be able to make their own decisions," and so on. Whereas the centralized view says, "No, we've got to look after everyone." But this discussion has been extremely pragmatic. It's saying, "Well, centralization and decentralization actually just have costs and benefits." If you consider those, then you don't want to be all the way at one end or the other, but you also don't want the same level of centralization and decentralization for every aspect of a society or of a company. You need to look at each aspect on a case-by-case basis and ask, "Where do we want to fall on that spectrum?"

SATVIK: Exactly. I will say that very frequently, the most helpful way of considering it is to look at one thing, one activity, and ask, "Do I think this would be better if it were one level more centralized or one level more decentralized?"

SPENCER: That just seems like a generally useful strategy for thinking: it's very often extremely difficult to figure out the optimal amount of a thing, but much easier to say, "Okay, I don't know the optimal amount, but is the current amount too high or too low?" and then just nudge it in whichever direction gives the biggest marginal improvement.


SPENCER: Satvik, this was really fun. Thanks so much for coming on.

SATVIK: Thank you for inviting me.




