December 29, 2022
Why do organizations get slower as they grow? What can organizations learn from slime molds? What are the advantages of top-down organization versus bottom-up organization, and vice versa? How can organizations encourage serendipity? What use are doorbells in jungles? Why is it so hard for organizations to set a "north star" that is at once plausible, coherent, and good?
Alex Komoroske has over a decade of experience in the tech industry as a product manager focusing on platform- and ecosystem-shaped problems. While at Google, he worked on Chrome's Web Platform PM team, Augmented Reality in Google Maps, and Ambient Computing. He's fascinated by how to navigate the emergent complexity within organizations to achieve great results. You can find some of his public writing at komoroske.com.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Alex Komoroske about why organizations get slower, strategic empowerment, and optimizing for collaboration.
SPENCER: I'm pleased to tell you that today's episode is sponsored by GiveDirectly. GiveDirectly is a global nonprofit that lets you send money directly to people living in extreme poverty with no strings attached. One of the things I find really cool about the GiveDirectly website is they have this section called recipients, where you can actually see a constantly updated feed of who's getting the money that they're giving, and what those people are doing with it. It's really fascinating to see. Just to give you a couple of examples, Paul in Rwanda used his cash to buy a cow, upgrade his family's home and launch his bicycle shop. Eunice in Kenya used her money to cover food costs, pay her children's school fees, and keep her tailoring business afloat during the pandemic. It's really neat to see all these different use cases of people living in extreme poverty who are getting relatively small sums of money, but they're making a really big difference in these people's lives. So if you'd like to learn more, or you'd like to send money directly to someone living in extreme poverty to spend on what they need most, visit givedirectly.org/thinking. That's givedirectly.org/thinking.
SPENCER: Alex, welcome.
ALEX: Thanks for having me.
SPENCER: I think a lot of people recognize that as organizations get larger and larger, they also get slower and slower, and it becomes hard for them to accomplish things that even a small startup is able to do. So, I want to start today talking to you about that question. I know you have a lot of interesting thoughts on that. So, why fundamentally do organizations get slower? And then, we'll go into talking about what the right solutions are.
ALEX: Great. I think this is one of those problems where people very often assume, “Oh, it must be that somebody gets lazy, or there's some villain.” But it happens even if you assume absolutely everybody is very good at what they do, very hardworking, and very collaborative. It arises for an almost fundamental reason, due to the shape of networks and the way that people have to coordinate across the many different priorities they could be working on, and that has a very fundamental, superlinear acceleration to it. What's hardest about it, honestly, is that it shows up just because there's inherent uncertainty: different people have different information, and the conditions are changing pretty often. That makes it extremely hard to coordinate. The amount of effort it takes — the overhead it takes to coordinate — can become a huge portion of the underlying effort. And that just makes everything significantly harder to do.
SPENCER: Okay, so let's compare a startup to a large organization. Is the idea that in a small startup — let's say there are only two people working on it — all the information about what they need to do is already in their heads? Whereas in a larger organization — maybe 10 or 15 people, or even more — people have to coordinate in some way in order to get the task done.
ALEX: Yeah. There's the question of how many people already share the same information, or share context or trust, so that they can pool what they each know and see a full picture of it. And it's also just that you have to communicate with lots of different people across the team, and that goes up with the square of the number of people on the team. As teams get larger and larger, it goes up at a superlinear rate. It gets worse when you're working with lots of different people across many different teams. So if you sometimes work with a team of 15 people, and then you also work with this other team of 40 people, that makes this effect even harder to deal with.
SPENCER: Let's talk about the ‘n² rule' for a minute. I think where that comes from is, if you have n people and they all have to communicate with each other, then the amount of communication that has to happen is something like n² conversations. More exactly, it's n times (n minus 1) divided by two, but roughly n² conversations have to happen. Is that where you think this n² coordination law comes from?
ALEX: I think that is the fundamental reason it shows up. Obviously, in many cases, you don't actually have to have literally n² interactions between people. If you have a leader who everyone looks to, and they speak at a meeting with lots of people, then lots of people will hear that utterance, so it doesn't require that person to have n different conversations. In practice, though, I think it's a reasonable proxy for the upper bound. I think it's actually much closer to that than people generally assume in large organizations, because often there's tons of extra information, tons of extra hidden constraints and other things that people have to communicate and share, that a broadcast medium doesn't work particularly well for.
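[A quick numerical illustration of the pairwise-channel arithmetic discussed above — a minimal sketch, not from the episode. The team sizes are arbitrary examples, and n(n−1)/2 is the upper bound Alex describes, not the number of conversations that literally happen.]

```python
# Minimal sketch (not from the episode): how pairwise coordination channels
# grow with team size. n*(n-1)/2 is the upper bound on distinct conversations.
def pairwise_channels(n: int) -> int:
    """Number of distinct person-to-person pairs in a team of n people."""
    return n * (n - 1) // 2

for n in [2, 5, 15, 40, 100]:  # team sizes are arbitrary examples
    print(f"{n:>3} people -> {pairwise_channels(n):>5} possible pairwise channels")
# 2 -> 1, 5 -> 10, 15 -> 105, 40 -> 780, 100 -> 4950: roughly n^2/2, i.e. superlinear.
```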
SPENCER: Right. If you could bring everyone together, let's say once a week, and coordinate all the common information people need to know, then you'd only need that one conversation to spread the information. But you're saying, in practice, that's not necessarily going to happen with all the relevant information. Maybe it'd be too much information even to have in one meeting, and it'd be wasteful. So instead, you have a whole bunch of these pairwise conversations going on?
ALEX: Exactly. In practice, the information is even richer than people give it credit for, because there are lots of things where people feel like, “Ah, I probably shouldn't bring this up,” or “I'm not sure if this other sub-team is really up to the task,” or “This thing that the leader thinks is really easy, I think is actually gonna be really hard, and I don't want to undermine them in the meeting in front of other people.” So in practice, there's often significantly more hidden information, hidden constraints, that people aren't actually talking about. Even in one-on-ones, it's only possible to share them if you're having very authentic, high-trust one-on-ones. But it's really, really challenging to get all that information, especially on a fast-moving, broad team.
SPENCER: Does it become a bigger problem at larger organizations?
ALEX: I think so. My experience is primarily at large organizations — I was at Google for 13 years — and I think it is true at larger organizations. The broader the surface area of the entire organization — the strategic things it might plausibly be doing — the more often this seems to happen, because it means there's a wider field of possible orientations that people might be pointing toward at the beginning, and you have to find a coordination point that everybody can agree is reasonable. So I think it generally is harder in large organizations, especially when you have to work with the infrastructure team and the legal team, and you've got to work with the marketing team, which is balancing marketing launches across 40 different teams right now and trying to figure out how they all fit together. I think people will often have the mindset of what I guess would be called a complicated problem: you can break it into sub-components, solve them independently, and then reassemble them into a solution that is the correct solution for the whole. In practice, you've got this bustling, interrelated, ‘whole being larger than the sum of its parts' kind of complex problem, where you're trying to navigate and balance all these different constraints and the movements of all the different teams. And I think it's way, way harder than people superficially give it credit for.
SPENCER: Got it. Besides this n² issue, what are some of the other issues you point to in larger organizations?
ALEX: I think a lot of it comes down to trust. I think trust has a much larger role than people give it credit for. One thing that shows up quite a bit is the fundamental attribution error. Have you ever heard of that?
SPENCER: Yeah. Do you want to explain it to our audience?
ALEX: Yeah. The fundamental attribution error is this notion — the example I always give is, when I'm late to meet a friend for coffee, it's because of traffic; the systemic situation is outside of my control. When my friend Sarah is late for coffee with me, it's because she doesn't value other people's time, and she's always late. We tend to blame systemic forces for our own failures and intrinsic qualities for other people's failures. Part of it is that, because we're situated inside ourselves, we can look around and see all of the myriad constraints that we're under, but we can't see those as easily for other people. So you have to actively think about all of the myriad constraints that other people are under too. And when you don't do that, it naturally creates this lack of trust, a scenario that can spiral as things get much busier.
SPENCER: One comment on the fundamental attribution error. There's a way in which, insofar as it's true, it's an irrational tendency, right? Because it's judging other people differently than we judge ourselves in a way that can't necessarily be justified. But a lot of times when we're dealing with other people, we have way less information about them. So imagine that the only thing you knew about a person was that they just cut you off on the road — literally, it's your first interaction with them. It seems a much more reasonable inference to say, “Well, I have one piece of information. It's bad; they're probably more likely to be a jerk.” Whereas with ourselves, we have lots and lots of pieces of information. So if we cut someone off on the road, we know, “Oh, we have a lot of other traits as well.” So I wonder if that's also part of what's driving it?
ALEX: Absolutely. I think if you're only ever gonna have a one-off interaction with somebody — you have no other priors about that particular person, you don't really know if you're gonna interact with them again — then you don't really know what's up. But when you have someone that you're working with, and expect to work with for many months or years as a teammate — you've earned trust, you've seen them at offsites and dinners, you've heard about their family, and you've seen how they've succeeded and done various things that are really, really great — that gives you a lot more trust to draw on. Whereas someone cuts you off in traffic, I have no idea. The important part, though — and I agree that, all else equal, it is a reasonable inference if the only interaction I have with this person is them being a selfish jerk — is that I find the opposite to be a useful default perspective. It's so easy to jump to “that person is a jerk,” and that will prevent you from learning from their perspective to some degree. Now, if someone cuts you off in traffic, you're never gonna learn that person's perspective because you're never gonna talk to them again. But with somebody that you work with, or that you might have repeated interactions with, I try to default to compassion: assuming that they are acting in good faith and doing a good job, that they are operating under constraints I can't fully sense, and trying to be curious about what those constraints might be. Because often, once you find the actual constraints people were under, the right answer just pops right out.
SPENCER: I think this points to an interesting difference between large companies and small companies. At small companies, you'll probably have many repeated interactions with everyone at the company. At large companies, there'll be many people that you might occasionally have to interact with, where you haven't built up a long track record and seen their history. And I wonder if that's also part of this trust issue that you mentioned.
ALEX: That totally tracks. For every interaction you're having in a given day, what's the chance that it's with somebody you haven't worked with before, or haven't had a deep interaction with, at a large company — especially one with lots of interdependencies and lots of different products that all have to fit together into some kind of cohesive whole? The likelihood is inherently higher that you're going to be talking with somebody you haven't met before. Although I also find that in a lot of these larger organizations — and definitely in my own experience — you're going to work with the same people in different contexts again. So my rule of thumb is: don't be a jerk to anybody. Assume good faith. Assume that even if you're working with this person now and it's a bit challenging, you might work with them again in the future. So make sure you don't treat anybody poorly, or treat them in a kind of extractive way — because it's a nice thing to do, and also because there's a non-trivial chance you're going to work with them again at some point [chuckles]. The other thing I've found is that large organizations really run on gossip. This is not “Oh, gossip is bad.” No, gossip is a really efficient information transfer mechanism behind the scenes. So when someone starts working with a new team, they're going to look around and say, “Oh, okay, I used to work with Sarah a couple of years ago; she works with that team,” and go, “Hey, what's this guy like? What should I know about him?” Then she'll say, “Oh, well, he's really great at moving things forward, doesn't really get the big picture, great at team building, but not great at strategy,” or whatever. People are gonna gossip about that information. So even if you haven't directly interacted with a person, your reputation will likely precede you. It's just a good idea [chuckles] to always act in a way that you'd be proud of.
SPENCER: It's funny, the word gossip makes it sound bad. But if we just call it “large organizations thrive on communication about the way other people work,” then it sounds positive [laughs]. So it's sort of a spin thing.
ALEX: The reality is, my direct experience with this is that I was part of the Associate Product Manager program at Google. I started as an APM intern back in 2007, and at various points I helped run it or mentored a number of PMs in it. That APM program is really interesting because you have roughly 45 people a year, who were all picked out of undergrad, and they're all thrown into a situation where they don't have direct competition with one another, so they can commiserate and learn from one another (because they aren't trying to withhold information from each other). Then a year after you start, when the next class starts, you're like, “I still have no idea what I'm doing.” But the next class comes to you for mentorship, and they go, “How should I do this?” “I have no idea what I'm doing.” “No, no, you know more than I do. You have a year of experience.” What it does is create these really nice interconnections across the years. It creates a really dense network of people who trust each other, who've been through similar situations, who've gone on the trip that we do after a year. This creates a really effective gossip network, actually. So you have a bunch of people far-flung across the company, who trust each other, who are very savvy and very good at picking up patterns. And that is a really effective mechanism for moving lots of information throughout the organization.
SPENCER: So, going back to the main thread, you were mentioning the fundamental attribution error. Do you want to tie that in with the issues of large companies operating slowly?
ALEX: Yeah. I think that when everybody's working really hard in these large organizations — everybody's at 150% of capacity, they don't have time to think about it — it's really easy for people to jump to the conclusion of “Oh, that other team, those people just don't care enough,” or “They're just trying to get out of extra work,” or “They just don't understand x.” It's really easy to jump to that conclusion. What a convenient conclusion: “Oh, it's the other person's problem.” So that leads to an almost default state that, if you aren't careful, will lead to a lack of trust. Over time, that compounds and it gets worse and worse. The environment is ambiguous and fast-moving, so if you already assume that another team or another person isn't going to do a very good job, you will find all kinds of signal in the cloud of ambiguous stuff that's happening that will reaffirm that hypothesis. It's really easy to get stuck in this trap, where people aren't trusting one another across a large organization.
SPENCER: How much do you think small-scale tribalism comes into play here, where you've got your team, then there's other teams, and maybe you have different approaches to doing things or different goals for the organization or different priorities? And now you started getting a competitive mindset instead of a collaborative mindset?
ALEX: Yeah, like a zero-sum mindset versus a positive-sum mindset. I think these kinds of heuristics — these kinds of tribalism — are baked pretty deeply into the firmware, and they show up in a lot of places. Alex Denko said last week, “Ignore human nature at your own peril, because it's going to keep coming back to bite you.” I think that's true; those kinds of forces do show up. One thing I would look at, when I was responsible for teams collaborating far-flung across the organization, is: “Okay, let's see. Our team is responsible for an established piece of infrastructure that's messy and slow to move, and we have to be thinking very long-term about what's going to happen if we make this precedent-setting thing in two years. This other team we're gonna need to work with is a greenfield team; it's thinking about the short term, about how to find an interesting thing with product-market fit. Those two cultures are naturally going to clash.” So just knowing that naturally — they're gonna be short-term focused, we're gonna be long-term focused — means let's invest in preventive maintenance ahead of time, before it becomes a problem, to help us get to know each other. One of my rules of thumb is, if people say, “Well, they think so-and-so” — if I hear the word “they,” especially tied to anything slightly negative — I'm like, “Okay, we've got a problem.” I want us to get to the point where we say “us”: we have this problem. So how do you get these different teams to think of themselves as part of one unit, to take advantage of tribalism as opposed to having it tear you apart?
SPENCER: Right, because as long as they think of themselves as two tribes, then your problem is there. But if they now think of themselves as one tribe, that may actually create trust and synergy.
ALEX: Exactly. This is why there are lots of little tricks you can do for this kind of thing: “Hey, come up with a team name that everybody uses.” Get some swag that everybody has. Do an offsite where you're all together for a day — and of course at offsites, what's most important is the hallway time (make sure you have lots of it!), and also the dinners and the low-stakes interactions at night between the days. People go, “We don't have time for that.” But if you're going to be working together, you actually don't have time not to do that. You have to do these kinds of things, because they will significantly improve the expected success rate of that collaboration.
SPENCER: It's interesting how you can combat tribalism at either extreme. So you can go all the way to individualism and say, “We're all just people. There are no tribes and every person is just their unique individual.” And then you can go the other extreme and say, “We're all part of one giant tribe.” And you can see this in companies, where they could either push individualism or we're all part of this giant corporate tribe. Or, you can see it in societies where you can have extremely individualistic societies, and you can have societies where it's like, “We all believe in this really strong shared set of values. And so we're all one.”
ALEX: Yeah, totally. I think it's tempting to think of ourselves as one person in one context. But of course, we're in overlapping contexts constantly. We're in different tribes, different groups: we're members of this company, and within it, this team; we're a member of this group, that knitting club; we're a member of maybe this religious association, this neighborhood group. So we are multifaceted people, embedded in lots of different things and lots of different identities. And you can strengthen which ones feel most salient and which feel less so over time. One of my favorite anecdotes is the notion that before, I think, roughly 1890 or so, “the United States” was a plural noun — the emphasis on the states, the individual states — and after around 1890 it switched to being a singular noun, emphasizing the united portion of it. I think that's such an interesting example of: is the individual or the group primary? Which one is more salient to most people?
SPENCER: That's a really cool example. All right. So, we've talked about these n² scaling laws. We've talked about the fundamental attribution error and tribalism that can occur in companies. So, what do we do about this? And maybe it's good to start with: what do people think you should do? What do you think they're wrong about? And what do you think we should actually do?
ALEX: There's a presentation about this concept — I've created a number of incarnations of it over the last few years, and there's one now on my website called ‘Slime Molds.' It goes through and tells this story with an approachable set of emojis, at a relatively quick pace, and explains how this works and why it happens. Various incarnations of it have been read in different contexts by quite a few people. It's funny, people will come to me and say, “Oh, you're the Slime Molds guy. Great. So that means this project is gonna be really slime moldy — that means we're gonna need lots and lots of spreadsheets and tracking processes and what have you.” And it's like, “No, it's the opposite of that.” In this case, the question is: what can we let go of? What are the things we can relax our grip on, just a little bit, and be okay with it being temporarily a little bit messier, but on a path to ultimate convergence?
SPENCER: Do you want to explain the slime mold analogy?
ALEX: Yes. A slime mold is a type of creature that is, to me, the canonical example of a complex adaptive system. It's lots and lots of individual cells with no particular intelligence, acting as a colony, as a group — I'm actually not sure if it's technically one organism or a colony. But it can spread and find nutrients and create tendrils out to those nutrients in extremely clever ways, despite there being no central planning. In the slime mold deck that we're talking about, there's a screenshot from some researchers who got a slime mold to re-derive the Tokyo railway network. So they're surprisingly good at these interesting exploratory behaviors, despite not having any central intelligence.
SPENCER: Is the idea there that they were trying to solve some minimum time problem of, where would you place your railways in order to be able to get across the city as fast as possible or something?
ALEX: I believe so. It was some form of optimization problem. In general, I'm fascinated by complex adaptive systems. I think they're interesting lenses for understanding a lot of phenomena that emerge — you see these behaviors and ask, “Why does this thing happen this way?” And the answer is the emergent behavior of lots of individual agents making locally optimal decisions, or following very simple rule sets, that creates extraordinarily interesting and intricate behaviors across the whole system.
SPENCER: So what's the opposite extreme — the model farthest away from the slime mold model?
ALEX: What's funny is that this presentation uses militaries — traditional, command-and-control militaries — as the canonical example. A lot of folks have reached out to me about this, and I've had fascinating conversations with tons of them, who tell me, “Oh, no, no, that's not the way modern militaries work at all. Actually, they're much more slime moldy than you think.” So even militaries aren't a great example. Maybe a command-and-control style factory back in the early 20th century is a better canonical example of the very precise, very regimented, top-down command-and-control style approach.
SPENCER: Right, so almost no decision making at the bottom ranks.
ALEX: That's right. The bottom ranks are effectively just automatons, executing on a script, with no particular autonomy or ability to respond to novel information at the leaves.
SPENCER: Is this continuum really about where the intelligence in the system is? Is the intelligence at the top looking down? Or is intelligence at the bottom in each little point?
ALEX: Yeah, that might be one way of framing it; that gets at bottom-up versus top-down. In practice, I think almost every system is some mix of the two. It's not even really a coherent idea to say “this is a bottom-up system” — every system, to some degree, has some bottom-up characteristics, especially if it's a complex adaptive system. So in practice, it's always a mix, and looking at it from different perspectives might lead to very different conclusions. You might find, for example, that policy decisions about HR complaints or something follow a very top-down style system within an organization, but product innovation within that exact same organization might be a very bottom-up process, or some mix of the two.
SPENCER: You see this tension in the US government as well, where some people want to centralize decisions more in the federal government, and others want to push them down to the local level. You get these arguments: “Well, if the federal government does it, it can be more efficient in certain cases, and more information can be integrated intelligently in some cases.” And other people say, “No, it's the opposite. If you push it down to the local level, they can react to the needs of that location that can't possibly be known by the top level, and they can adapt to the unique circumstances on the ground.”
ALEX: Absolutely. It's not like there's one true answer. Bottom-up organizations find it extremely hard to create interesting Schelling points or other coordination mechanisms, so you get this wildness where it's very challenging to predict where exactly things are going to go — but they tend to be much more resilient, much more able to adapt to circumstances, way, way less likely to die, as one way of putting it. Whereas top-down control can create Schelling points very actively, very straightforwardly, and very cheaply for things to coordinate around, but it can get very caught off guard by things. I think a lot of things are trade-offs — fundamental trade-offs — that we treat as black and white, as if there were some right answer. Once you see that basically everything is a trade-off, you can start asking yourself, “Are we contextually at the right balance point right now?” Because there is no one true balance point for every context; it is highly contextual. Then you get the sense that you're surfing that balance point: “Hey, are we a little bit too top-down or a little bit too bottom-up right now? Which would you say?” And if the answer is, “Yeah, I feel a little bit too top-down right now,” then, “Cool, let's nudge a little bit the other way.” As opposed to people thinking of it as this binary thing, this big huge shift that has to happen — it doesn't have to be. Relatively small, continuous nudges can keep a system in balance.
SPENCER: So, how do you see the slime mold idea applying in organizations?
ALEX: I think people believe organizations are much more top-down and coherent than they actually are. These bottom-up dynamics are way more prevalent and way stronger than people expect them to be, and that creates a lot of surprises. You see this all the time. These environments are inherently complex, inherently nonlinear, inherently surprising, constantly morphing, with all kinds of weird uncertainty happening. You'll see people finger-pointing and, you know, finding villains, or pointing out somebody who's just not doing a very good job. I think acknowledging that there are actually a lot of these bottom-up dynamics — where everybody really earnestly is trying to do the right thing — helps people have more compassion for themselves and for one another, and hold on to these things a little bit less tightly.
SPENCER: What does this model suggest about how we should help organizations move faster and be more efficient?
ALEX: A lot of times, when you have a relatively straightforward problem — everyone can see exactly where you need to go, there aren't very many complexities or uncertainties, we've done this a million times before — great, go all in on a top-down approach. Have a lot of process and structure; you'll get a lot of efficiency that way. When you don't know exactly what's going to happen — you don't know what the next thing with product-market fit will be, or how to respond to what your competitors are doing, and they're very fast-moving — it's better, I think, to put more of the focus on that bottom-up kind of innovation: allowing space in the leaves for people to find interesting ideas. Once you find an interesting idea, you can fan the flames, you can accelerate it, you can invest more energy in it. People often think that strategy is having a really good plan that's really rigorous and well-researched and well-executed with a lot of focus. In many cases, that's the opposite of strategy. Strategy, often, is creating the conditions that make it way more likely that you will have success. And that often means creating exposure to lots of different bets, lots of different seeds planted in different directions, any one of which might start growing. And then once one starts growing, you can invest in it. But I think people often try to hold too tightly to strategy and to plans, and then they end up losing out on all these opportunities that are all over the place around them, making their jobs way, way harder.
SPENCER: So what does that look like in practice to actually empower that slime-mold model inside a larger organization?
ALEX: I think it often looks like creating space for serendipity, creating space for a little bit of autonomy. One way of looking at this: Google famously had 20% time, and there's a well-known story about how it worked. In practice, 20% time was always basically a cultural myth — it didn't really exist; it was always 120% time. But it did two really useful things, in my opinion. One, it allowed space. Sometimes people would use that space to do things like create a knitting club, some kind of social thing. A knitting club isn't useful to the company per se, but what is useful is having a high-trust, very diverse social network overlay that is different from the organizational structure, which makes it significantly more likely that the right information flows to the right places at the right time. The second thing that 20% time did really nicely: a lot of things that have tons of indirect value over time are really hard to make a legible, succinct case for to lots and lots of people — and I would argue fundamentally impossible in many cases. So what 20% time did in practice is, someone would sit and be working on a lower-priority item, and somebody would come over and say, “Why are you working on that?” And they'd kind of wave their hand and go, “20% time.” And a number of those things might turn out to be hugely useful and have tons of indirect value down the road. So maybe somebody builds a little prototype where they duct-tape together a couple of different internal systems, just as a hobby project on the side. Someone would look at it and go, “Wait a second, I think that would be a really good fit for this other thing we have,” or “Wait a second, we should get that in front of some users.” And then you suddenly realize, “Oh wow, this thing is extremely successful, extremely interesting. We should make it a real product.” Having that little bit of space — that cultural awareness that it's okay for you to be doing a few things that don't fit in the broader plan — creates the possibility for these little seeds to be planted by people, seeds that might plausibly grow into something really interesting.
SPENCER: Why did Google discontinue 20% time? I think they shut it down sometime around 2013.
ALEX: I actually don't know the answer to that. It's funny: as a company gets larger, you naturally try to make things more efficient, and I would argue that is one of the factors that causes companies, in general, to accelerate their own obsolescence. You find the thing at the beginning, you find some product-market fit, and it has exponential growth. You notice it's growing, spreading very broadly based on word of mouth or some kind of network effect during this exponential growth. Then, great, we found this amazing hill. We should start climbing it, and you climb as hard as you can, and you make efficient processes. But every exponential curve has to hit its asymptote at some point. It can't keep going continually forever; there's just not enough space in the universe. So when you start hitting that, the hill starts dropping away from you. People in those organizations start thinking and feeling, “Oh my gosh, we have lost our mojo. The things that we've been doing in the past that worked really, really well are no longer working.” At that point, what you really want to do is start investing in long-term stuff: start planning for this product to become a mature product that you maintain, as opposed to growing it fast, and, in addition, look around for other seeds, other hills that you might want to climb. In practice, what happens is people freak out and say, “Everybody, it's all hands on deck, you gotta work harder to extract more from this thing.” And as the hill continues to drop away from you, further and faster and faster, the organization does exactly the wrong thing. It starts focusing more and more on driving efficiency: “Okay, we're going to have a spreadsheet that tracks exactly which features we think are gonna give us new daily active users, we're gonna hold teams accountable for those, and we're gonna make sure they're making wonderful progress on the right waterfall diagrams.” That's exactly the opposite of what you want to do. There's a large product I used to work on at Google that wanted to grow daily active users as a mature product — lots and lots of different user journeys, all kinds of stuff. The year we had the highest daily active user growth was actually the year that all the various leads couldn't agree on what the priorities were. So everybody just said, “I guess we just work on the P2s, I don't know.” Everyone just sanded down the rough edges, made various bits of the infrastructure a little bit faster or more resilient, or added a couple of very obvious things in response to what users were doing. That was actually where we saw the highest daily active user growth. So I think people double down on plans and big goals when, honestly, at that point, it's probably better to allow a little bit more of those kinds of interesting ideas to bubble up.
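[A small illustrative sketch, not from the episode, of the “every exponential hits its asymptote” point: a logistic curve looks exponential early on, but its per-period gains peak and then shrink — the hill dropping away even as totals keep rising. All parameters below are made-up placeholders.]

```python
# Minimal sketch (assumed parameters): logistic growth looks exponential at first,
# then its incremental gains peak and decline as it approaches the ceiling.
import math

CEILING = 1_000_000   # hypothetical maximum addressable users
GROWTH_RATE = 0.5     # hypothetical per-period growth rate
MIDPOINT = 20         # period at which growth is fastest

def users(t: float) -> float:
    """Logistic curve: looks exponential while users << CEILING."""
    return CEILING / (1 + math.exp(-GROWTH_RATE * (t - MIDPOINT)))

prev = users(0)
for t in range(5, 45, 5):
    now = users(t)
    print(f"period {t:>2}: {now:>9.0f} users, gained {now - prev:>9.0f} since last check")
    prev = now
# Gains grow until around the midpoint, then shrink every period after it --
# "the hill starts dropping away" even though total users keep rising.
```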
SPENCER: It seems like if you empower people to have more decision-making at a lower level, it has these advantages, as you point out, but it also seems to bring on greater risks. There's the risk that people end up implementing bad ideas that maybe wouldn't have gotten approved at a higher level, or that people are just rowing in different directions because everyone has their own idea of what to do. So how do you mitigate those risks? Or maybe you don't think those are risks?
ALEX: I do think they are risks, and they're non-trivial ones. The way I look at the first one is capping the downside: how can you get a finite cap on the downside while still having some exposure to uncapped upside, to the potential for upside? One way you can cap the downside is to make sure you have a really great hiring bar, where everyone can safely assume that everybody there is very good at what they do, very collaborative, very hardworking. That saves a lot of problems, if you can actually make that a baseline assumption about everybody. I used to work on the web platform team on Chrome, and there you think quite a bit about security models. For every single web API that you add, you have to assume the content (the web page) is actively malicious and trying to hurt the user, and that forces you to be really, really careful about how you design those APIs — obviously in collaboration with many other browser vendors. So if you cap the downside — you make it so it's not possible for teams to create that much of a problem for other teams — then you can allow some of this interesting stuff to bloom. It's really, really important that you do this, because there are tons of downsides that people don't fully understand. As a general rule of thumb, if a team is responsible and accountable for the space it has autonomy over, you can assume it will make good self-interested bets (a portfolio of short-, medium-, and long-term stuff). But if there's a shared resource — something in the commons that they benefit from — they aren't necessarily going to be protecting that thing. One thing that is common across large organizations is the brand, the brand of the overall product. So one team doing some reckless thing — “Oopsies, we had a huge security hole and ended up leaking a bunch of users' personal information or something” — cool, that reflects poorly on everybody in the larger organization. That's a significant downside. So, for example, how can you build infrastructure that makes it really, really hard for people to do bad things with personally identifiable data? That would allow you to have more confidence, and more trust, that people won't hit these huge downside scenarios.
SPENCER: So would you say that that is largely an issue of aligning incentives? To make sure that, if people create a problem, they have to bear the cost of that problem?
ALEX: I think it largely is, but saying it's just a matter of aligning incentives implies that there is a way to align all the incentives. No matter what, everybody has intertwinings across lots of different teams throughout a large organization, almost without question — at the very least, you have the brand intertwining. So in practice it's really hard, because it's just not fully possible to align the incentives perfectly. You can definitely get better at it, to better capture some of the right balance, but it's never possible to solve this problem perfectly. Hence, one of the reasons you just have to trust the people within the organization, make sure you're hiring well, and make sure that you have a culture where people feel ownership and accountability.
SPENCER: A while back in this conversation, you mentioned this idea of serendipity. One of the aspects here of creating a really effective organization is allowing serendipity to occur, right? If you have such strong top-down controls, where everyone's been told what to do, you miss out on opportunities for new ideas to blossom or unknown unknowns that take the organization in a really good direction. So how do you think about fostering that kind of culture where you can have new ideas and serendipity occur?
ALEX: I think it gets to allowing people to have the space to feel able to do this. For many years, I have talked to a lot of different people across many different teams at Google; I've mentored hundreds of people over the years. At various points, I felt like it was just self-indulgent — I felt like, “I like talking to people, so I've just taken some time between other, more serious meetings to talk to people.” It wasn't until many years later that I realized, “Wait a second. This is actually me almost stochastically increasing my luck surface area for the company.” The more diverse the network of people that you know across the company — people at lots of different levels, functions, product areas, what have you — the more likely you will be at the right place at the right time, where you find the idea that has a huge impact for the company. So you don't know which thing is going to end up being ‘the miracle,' but you can stochastically increase the likelihood of miracles by doing things like novelty search: allowing yourself to spend some time building other networks of people and gossip networks, getting to know different people across the organization, dabbling in things. If you have a lot of momentum, a lot of ideas, allow yourself to go explore them. One thing that is really important: when people have intrinsic motivation, oh my gosh, it's unstoppable. It's amazing. They get into these flow states and do all the cool stuff. When it's extrinsic motivation — “Well, you have to do this to get promoted” — it really messes with that, and you get much less value out of it. So my rule of thumb was: don't try to make people have extrinsic motivation for the things I cared about. Look for the people whose intrinsic motivation was already roughly pointing in a direction I believed was interesting, and give them a little bit of encouragement. Fan the flames. Say, “Wow, that's a really neat thing that you just wrote up. I wonder how that fits in with this thing?” Or “You should go talk to this other person who has a similar idea. Maybe the two of you can collaborate on something.” That was a very different approach. One of the downsides of this, by the way, is that when it's working really well, it just looks like luck. So when these miracles happen, people go, “Oh, you just got lucky.” And you're like, “Well, kinda, yeah. But on a meta-level, I've been stochastically increasing my luck surface area, so the likelihood that I get a lucky break is significantly higher than if I didn't do this.” Does that make sense?
SPENCER: Yeah, absolutely. So facilitating conversations between many different people seems like one way to increase the luck surface area, as you put it, right? And maybe also strategic empowerment, where people feel like they have the ability to act on ideas, rather than having ideas that get shut down immediately. But what are some other ways you think about increasing this luck surface area?
ALEX: I think a large portion of it is that a lot of innovation comes from people combining existing stepping stones, existing ideas, into novel combinations. The more diverse those things are — the farther afield the existing stepping stones that are being combined — the more likely the insight or the innovation is significant and game-changing. A lot of it really does come down to allowing people to indulge, to do things that feel self-indulgent, like going down rabbit holes that they're super into, or doing these kinds of extracurriculars or hobby projects. It sounds so simple — like, “That can't possibly be it” — but that's certainly what I found [chuckles]: creating situations where people feel that they're able to thrive, not to be a cog in the machine, but to be the best versions of themselves and lean into the things that make them weird. I think people look at weirdness in a large organization as bad — weirdness just means that you don't fit in, that you're hard to measure. But the weird stuff is where all the cool things happen [chuckles]. So I encourage people: what is the thing that makes you weird in a positive way? Lean into that. Don't be ashamed of it. They say, “Yeah, I'm kind of a weirdo about this thing. I'm really into board games. I'm really into dabbling with this kind of thing.” “Cool. Great. Lean into that and find other people who might find interesting combinations with that.”
SPENCER: So I know you have this metaphor of a doorbell in the jungle, and that's a concept that relates to serendipity. Do you want to tell us about that metaphor and how it applies?
ALEX: Yeah, so this is another blog post, which, hopefully, we can link to in the podcast notes. I see a lot of people spend tons of time debating strategies — say, they're trying to decide whether to develop something for developers, who are highly motivated users, and people are saying, “Yeah, well, that would be great, but no one wants that thing.” Then you just spend all your time debating, debating, debating about how many millions of dollars you think it's going to be worth in five years, and all that stuff. But the thing you'd have to do is actually very cheap. So you say, “I don't think anyone's going to crawl all the way through the jungle to try to come in through this backdoor.” Okay. But if someone did crawl through the jungle and come to this backdoor of our business, we would be super pumped, right? “Oh my gosh, of course, yes. For sure, we would absolutely give them the thing they wanted and build that product for them.” “Great. How much does the doorbell cost?” “Five bucks.” “Cool. Why don't we put a doorbell next to that door, so that if somebody does crawl through the jungle to the back of our business, we can hear it and then respond to it. And if they don't, it was five bucks; the opportunity cost is basically nothing.” So it's a pattern of, instead of focusing on what the size of this thing is — cool, don't make the investment until you actually know that there is some demand. It's relatively cheap to place these kinds of doorbells in the jungle all over the place, which then allows you to react to concrete demand without having to guess perfectly ahead of time exactly where it will be.
SPENCER: So what's one or two examples of real-life products that you feel did this?
ALEX: Actually, in the blog post, I listed a number of worked examples of this. One of the simplest is a developer signup form. You put something in your documentation, and at the bottom say, “Hey, if you're interested in a feature that does X, Y, or Z, put in your email address here.” That's a doorbell in the jungle, because only people who are very motivated will have gotten down to that part of the docs and will express their interest. But if someone submits their email to that, reach out to them and say, “Hey, cool. Can you tell me more about your use case and what you want?” Often, they'll describe what they're doing and you'll go, “Oh my gosh, wait a second, that's a really interesting use case we never thought about.” There are a number of other forms of this. One pattern I used to use a bunch: you take features that are kind of geek-mode features — you don't know if there's going to be a lot of user demand for them — and you bury them deeper in your UI. You don't put them up front; you put them maybe three pages deep inside your UI. Then motivated users who get there can configure the thing and play around with it, and you watch the behavior — anonymous usage statistics, or UXR, or what have you — and get a sense of what those users are doing. Then you can expand that usage from there if you find something interesting. One of the reasons this pattern works, by the way, is that if somebody crawled through the jungle and got all caught up in the bramble bushes, they had to be extremely motivated to get there — and there are almost certainly a number of other people whose pain threshold is just a little bit lower. So if you reduce the amount of bramble bushes people have to crawl through, you will almost certainly unlock more demand in that direction. You get somebody who makes themselves known to you and helps you: without you searching, they come to you and tell you what they want. And from there, it's just a simple hill-climbing exercise of making it easier and easier to use for larger and larger populations of people. If you ever stop getting more demand in that direction, cool, stop investing in building features in that direction. You don't have to build anything except things that respond to concrete demand.
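[A minimal sketch, not from the episode, of what a “doorbell” might look like in code: a cheap interest signal attached to a buried, geek-mode feature, tallied later to see whether anyone is ringing it. All names here — ring_doorbell, export_as_csv, the log file — are hypothetical illustrations of the pattern, not any real product's API.]

```python
# Hypothetical sketch of a "doorbell in the jungle": a cheap signal attached to
# a buried feature, so concrete demand can be noticed before any real investment.
import json
import time
from collections import Counter

DOORBELL_LOG = "doorbell_events.jsonl"    # hypothetical append-only log

def ring_doorbell(doorbell_id: str, detail: str = "") -> None:
    """Record that a motivated user reached this buried entry point."""
    event = {"doorbell": doorbell_id, "detail": detail, "ts": time.time()}
    with open(DOORBELL_LOG, "a") as f:
        f.write(json.dumps(event) + "\n")

def export_as_csv(data):                   # hypothetical geek-mode feature,
    ring_doorbell("advanced_csv_export")   # buried three pages deep in the UI
    ...                                    # the feature itself can stay minimal

def tally_doorbells() -> Counter:
    """Periodically check which doorbells are ringing; invest only where they do."""
    counts = Counter()
    with open(DOORBELL_LOG) as f:
        for line in f:
            counts[json.loads(line)["doorbell"]] += 1
    return counts
```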
SPENCER: So is it fair to say that this idea is: if you're not sure whether to do something, instead of debating it endlessly, do some really cheap version of it that allows you to essentially assess the demand? And it doesn't even have to be optimized to get people to use it. You can just wait around and see if people do use it. As long as it's cheap enough to put out there, the cost-effectiveness could still be really good.
ALEX: Yeah, exactly. And crucially, the other thing you want is diversity. For any one of these doorbells you put in the jungle, there's very little likelihood, actually, that someone crawls through the jungle to get to it. So you want them at lots of different angles, in lots of different directions. They're almost a sensing network for you: you want to sense where the demand is, and that allows you to be in a reactive position as opposed to a proactive position. So you don't want to do just one; you want to do a number of these, in various forms, in different directions.
SPENCER: So why don't organizations do this more often?
ALEX: It's funny. A bunch of these approaches are repeatedly, extremely successful. They work really, really well in a number of contexts, and they generally have pretty large, very positive returns. I think the reason they aren't done is that it feels so backward. Instead of saying, “Well, I have a plan, and my plan is really well-researched, and we're gonna heroically execute on this plan, and I figured out exactly what's going to happen in the next five years,” you're saying, “I don't know. I don't know what's gonna happen.” And that requires vulnerability. It requires you to say, “I don't know what's going to happen specifically, but I trust — I'm confident — that this will lead to good outcomes for us in general.” Honestly, if you're someone new in your career, for example, and you hear this, you might say, “What the heck are you talking about? The way that we always do it is we come up with a plan, we have spreadsheets tracking the plan and its interdependencies, and then we execute on that plan. You have to have some ‘strategy' behind it.” I think the better versions of strategies are often metastrategies: these kinds of meta-approaches that give you positive exposure to luck. But again, as I mentioned earlier, when you use these kinds of patterns in large organizations and it works, people go, “Oh, you just got lucky.” So it's really hard to get promoted in these organizations. How can you make it legible that you didn't just get lucky — that you had actually structurally increased the likelihood of this luck happening? You can't run counterfactuals, and that's why it's so much simpler to say, “Look, we came up with a plan. We executed on the plan. We shipped that thing. Give me a promotion.” Even though you don't know if you did a good thing — something users actually want, or that's gonna have a ton of usage. So I think it just requires you to acknowledge that you don't know, to lean into this indirect value, in a way that's really hard for people to reward or acknowledge in a very concrete way. The world tilts in a certain direction — toward the more concrete, more specific, more planned — and it requires you to put in some effort to tilt the world in the other direction. But that's going to mean that you stand out a little bit as an oddball.
SPENCER: So we've talked about a few different challenges large organizations have. One is this n² scaling law: as you get more people involved, it becomes harder and harder to maintain communication between all of them, and I think your proposed solution was decentralizing things so that they can operate on a more local level and you don't have to integrate all the information up the chain. We also talked about the fundamental attribution error, and how we jump to conclusions about what other people think, and about tribalism — there, you talked about creating trust to combat this, and creating a sense of team unity across the whole organization. Now we're talking about the doorbell-in-the-jungle problem, where essentially people want to endlessly plan and debate what's going to work, when in fact they could just run cheap experiments to see if there's demand and scale things up as appropriate. So these are three different mental models for understanding why organizations get slower and slower as they scale — that's at least the way I look at it. Is that how you think about these?
ALEX: Yeah. To tackle complex problems — the ones that have lots of uncertainty in them and are constantly morphing — I think you need three things. 1) Practical experience: skin in the game, hands-on experience that develops the intuition (the know-how) that you can use to make better predictions in similar situations in the future. 2) A collection of theories, or lenses, or approaches, or mental models that you can use to lever up from your experience into broader, more predictive statements. And importantly, it's not just one lens or a small collection — you want as many different lenses and mental models as possible, because none of them is sufficient or is the perfect mental model that solves absolutely everything. You're gonna need different ones in different contexts, so you want a diversity of them. 3) The third thing, which I think is really important, is a mindset. When you're staring into these environments, they're extremely uncertain, and that uncertainty is existentially terrifying. If you don't acknowledge that — acknowledge the emotional component of what it feels like to stare into this uncertainty — then you won't be able to solve it. Each of these individually is a dime a dozen. You can find tons of books written by people like startup founders, or whatever, that say, “Here's the thing I did.” You can find tons of books and blog posts that have their particular mental model for solving a problem. And you can find tons of things — self-help books, a ton of instructors — that give you advice on mindset. But I think you need all three of those together. So yes, this is a collection of lenses that you can use, but I think they're part of a larger recipe for sustainably attacking these problems.
[promo]
SPENCER: One thing that we didn't really talk about, which I think of a lot when I think about larger organizations, is how your level of risk aversion changes. If you're a small startup, like if you just do the status quo thing, you run out of money, you die, right? So the default is sort of, “Yep, you're gone.”
ALEX: Default dead.
SPENCER: Default dead when you're just starting. When you're a large organization, the default is that you just continue selling a lot of products and making a lot of revenue or whatever. So it seems to me that there's a natural tilt in the level of risk aversion. The startup has to do a bunch of new things in order to survive. Whereas for the larger organization, if they just keep doing what they do, maybe they improve very modestly; actually, things are pretty good, maybe really good, so taking large jumps and large deviations is less essential, and there's also more to lose. They already have a goose laying golden eggs. They don't want to mess with it too much, because maybe they'll mess it up. So I'm wondering what you think about that.
ALEX: Yeah, 100%, that is absolutely the right way of putting it. The downside grows as an organization gets larger, because if you just don't touch the thing, if you just keep executing on the basic business, it will continue putting out more money or more value or what have you. As you get larger and larger, your downside gets real. One of the reasons it gets really hard to do things at large organizations is that different people on different teams see different parts of the elephant, a lot of different possibilities, and a ton of downsides, things where, “Oh, crap, that actually could be really disastrous if it happens.” So when someone raises a downside and says, “Well, actually, here's the thing that could make this really dangerous,” you go, “Oh, crap, that's a good point.” And they aren't wrong. It's not that they're missing the big picture; they're often very precisely correct. So it creates this environment that makes it really, really hard to move, because you get increasingly constrained, because you actually do have downsides all over the place. “Oh, if that became successful, wouldn't that cannibalize our main business?” “Oh, crap, yeah, it would.” So you get stuck in this position where it's really hard to find things that don't mess with the main thing. One of the tricks, for these doorbell-in-the-jungle style things even at large organizations, is to notice that the downside is often, “If this becomes extremely successful down the road, it would cannibalize the main business.” But that would also be sweet, right? Because that would mean it was very successful, and that we have all these other adjacencies and other ways of monetizing it, or maybe it helps make the primary money-maker sticky. A lot of these concerns can be reduced by putting in checks or gates and saying, “If we ever got to the point where X became true, or this grew or was successful, we would pause, re-examine, and figure out how it fits into everything.” But a lot of these are theoretical. It's very easy for someone to make a concrete case about a worst-case scenario, to give an evocative example like, “Well, here's an example from this other industry that did a similar thing, and they died.” And they aren't wrong; these are good points. They're learning from other organizations. But the value of a lot of these things is considered much more speculative. So as a general rule of thumb, if you have concrete examples or concrete arguments about downside, and speculative or abstract arguments about potential, the concrete downside argument will win by default. The speculative value has to be 10x greater to win the default conversation within one of these organizations.
SPENCER: Right, so we have increasing risk aversion at the organizational level for larger organizations. But I think it also applies at the individual level. At a small startup, you've got the founders, who get to keep the upside if the thing goes really, really well, and early employees usually have some equity, even if it's a lot less than the founders'. At a really large organization, generally speaking, people have almost no equity, or maybe actually zero equity. They're just not going to keep much of the upside, but they will pay a downside cost, right? They try some wild new feature, and if it comes back to bite the organization because, I don't know, it's bad PR, or it leads to a security vulnerability or whatever, they might get fired. But if it becomes the next big thing at the company, they're definitely not going to keep most of that upside.
ALEX: Yeah, right. The “nobody ever got fired for going with IBM” line is the classic thing that captures this mindset: “Whatever, everyone else is doing it. It's good enough for everybody else, might as well just do that.” I've seen this in many places in these large organizations, where you talk to everybody individually in a trusting one-on-one environment, where people are being very authentic and candid about the constraints they see, and absolutely nobody believes that the overall strategy is coherent or good or moving in a good direction. And yet the entire system keeps moving in one direction that, if you play it forward, almost without question goes off a cliff. That way doesn't work, right? And you've confirmed with every individual person behind the scenes that this is true. And when you ask, “Well, shouldn't your product, or your sub-feature in this product, be different?” they say, “Listen, I'm not going to steer this entire ship. If I go and do something totally different that bucks the trend and goes in the direction we probably all individually agree is the right overall direction, what's going to happen is I'm going to be fighting upstream against everybody the entire time. It's probably going to fail, and I definitely won't get promoted. Whereas if I add a feature inside this bigger system, this whole ship that's going in a direction that doesn't make any sense, then it's totally legible to the rest of the organization, and I'll probably get promoted. So I'm going to do that.” It's like, “Ah!” But it totally does happen quite often. By the way, when a system is like this, it's in what I would call a supercritical state, where every individual person actually disagrees with the overall direction, but the overall system keeps moving in that direction. These can have massive upheaval given the right inciting incident. The canonical example is “the emperor has no clothes.” Everybody can see with their own eyes that the guy is naked, but nobody's going to say so, because they don't want their head cut off, or they don't want to reveal that they aren't cultured or whatever. But then one kid points and laughs, and now everybody can see that everybody else can see that this guy is naked, and the entire thing shifts into a totally different equilibrium. So when you find those supercritical states in organizations, one of the meta-bets you can make is that the system will likely reconfigure at some point in the next year or two due to some inciting incident. And if you're aware of that, you can sometimes help find those inciting incidents and fan the flames, and sometimes just make sure that you are well-positioned to do the right thing for the company when the inciting incident happens, whatever it might be.
SPENCER: It seems like this kind of behavior, where everyone is locally doing something that makes sense but the whole system is not, is a drawback of the extreme slime-mold model, where everyone is locally optimizing but nobody is globally optimizing. The whole ship might go off a cliff, even though the people rowing are rowing really well, and the person doing lookout is doing lookout really well, and so on, but nobody is making sure everyone is coordinating with each other.
ALEX: Yeah. So in practice, this is why you don't just have a slime mold; you also have to have a north star that everyone is sighting off of, one that is actually plausible, coherent, and good. If your north star is pulling you off a cliff, it's a bad north star, and those north stars are very hard to move, especially in bottom-up organizations. Often the north star comes from the top down: this is the official plan, and we're going this way. Okay, but everybody knows that it doesn't work [laughs]. Having a plan that is official, that everybody pretends to believe but nobody actually believes, is worse than having no plan at all, because now it's very hard to reconfigure, very hard for the bottom-up stuff to do little local experiments that pull things in a different direction. So in practice you get this worst-of-all-worlds kind of thing. So I agree that if you don't have some kind of coherent, plausible north star that everyone is sighting off of, a slime mold by itself just goes bleh. It explores everything and does nothing; it can't coordinate anything. You do need everyone to be pointing in a similar direction to get better things coming out of it. But I think people often believe that those directions, those north stars, have to be a lot more planned, a lot more top-down, with exact features all laid out in a big spreadsheet produced by a massive planning process, than they actually need to be. I have rarely seen organizations, in my personal experience but also talking to lots of folks from other industries who have reached out, where the north star is actually a good one. It is really, really hard to craft a north star that is plausible, coherent, and good.
SPENCER: Why is that? It's not intuitive to me why it's so hard to do.
ALEX: A lot of the constraints are not obvious. They're hard to make legible to a lot of people. One of my rules of thumb, the way I look at this, is: if you just have a north star and no plausible path to it, you're going to end up designing castles in the sky, and when you go to build them you'll go, “Oh, wait a second. Gravity. It doesn't work.” And if you only explore through your adjacent possible, then you're just going to go bleh [throwing up sound], exploring in no particular direction. You need a balance of both. By the way, “adjacent possible,” does that track for you? Is that something I should explain?
SPENCER: Yeah. Why don't you say what that is?
ALEX: So the adjacent possible is this notion of the set of actions that you, as a person or as an organization, might plausibly take in the next turn, in the short term. After you resolve all the different constraints around you, what things you might possibly want to do, the power dynamics in the organization, where you currently are, what people currently believe, your adjacent possible is the set of actions you can actually choose from. Adjacent possibles are often way smaller than people think they are. A lot of plans say, “And then we're going to do this massive thing that we're going to ship in three months.” It's like, no, wait a second, that is definitely not in your adjacent possible; that's going to take us four years to do, or whatever. So your adjacent possible is often much smaller than people think. But the critical thing is that once you pick an action in your adjacent possible, your adjacent possible shifts, because you're now in a different spot. Okay, cool, we wrote the code for that thing, and now we have a different set of actions available to us. What you want, if you sight through it, is for your iterated adjacent possible to trace a smooth path that could plausibly deliver you towards your north star. In practice, people plan these heroic moves that will magically force-of-will themselves quite far forward in massive jumps, which are basically not possible in practice, I would argue for fairly fundamental reasons. That's one of the reasons it's so hard: people go, “That sounds like incrementalism.” Yeah. Fine. Sure. But directed incrementalism is the way that basically anything interesting happens in the real world.
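As a loose sketch of that idea (my own illustration, not Alex's; the toy `adjacent_possible` function and the numbers are made up), you can model it as an iterated search where only small moves are ever available, and each move you pick shifts which moves are available next:

```python
# Hypothetical sketch of iterating through an "adjacent possible":
# only small moves are ever feasible, and each chosen move changes
# which moves become feasible next.
def adjacent_possible(state: int) -> list[int]:
    """The small set of actions feasible right now (toy stand-in for
    real organizational constraints)."""
    return [state + 1, state + 2]

def step_toward(state: int, north_star: int) -> int:
    # Of the currently feasible actions, pick the one closest to the north star.
    return min(adjacent_possible(state), key=lambda s: abs(north_star - s))

state, north_star = 0, 20
path = [state]
while state != north_star:
    state = step_toward(state, north_star)
    path.append(state)
print(path)  # a smooth chain of small steps, not one heroic jump
```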
SPENCER: It seems like incrementalism works when you're on a hill and you're trying to get to the top, right? You're making things better and better. It doesn't always work when you have to leap, or when you have to go downhill for a long time in order to get up a higher hill later.
ALEX: I think incrementalism often gets a bad rap because it's seen very specifically in that way, as only good at hill climbing. But incrementalism, like planting a seed, a really cheap little thing, a doorbell in the jungle, for example, is often really incremental; these are relatively small things. “Hey, what if I took this open-source thing and this existing piece of code and duct-taped them together and played around with it?” That's a very small, incremental action. And yet these incremental actions are finding interesting seeds that might grow into, to mix metaphors, a new hill. So I think most things actually do end up happening in relatively small increments. Eric Beinhocker has a great book, The Origin of Wealth, where he lays out a number of concepts, also discussed by a lot of other folks at the Santa Fe Institute, that frame a bunch of these exploration problems as fundamentally fitness-landscape searches via the algorithm of evolution. I would argue this is also what you see in machine learning, in the deep learning techniques we use today; Why Greatness Cannot Be Planned, written by a couple of machine learning researchers, makes a similar case in different ways. So I would argue that the vast majority of things really are found through incrementalism. It's just that they're found in a way that has lots of different search heads across the entire space, which allows you to find lots of different hills incrementally, if that makes sense.
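Here's a minimal sketch of that search picture (my own toy example, not taken from Beinhocker's book): many independent search heads, each only ever taking small incremental steps, still end up finding several different hills on a rugged landscape:

```python
import math
import random

# Toy "fitness landscape" with several hills.
def fitness(x: float) -> float:
    return math.sin(x) + 0.3 * math.sin(3.7 * x)

def hill_climb(start: float, steps: int = 500, step_size: float = 0.05) -> float:
    """Purely incremental search: try a small move, keep it only if it improves."""
    x = start
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

random.seed(0)
search_heads = [random.uniform(0, 20) for _ in range(10)]  # spread across the space
peaks = sorted(round(hill_climb(s), 1) for s in search_heads)
print(peaks)  # several distinct local peaks, each reached via tiny steps
```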
SPENCER: Maybe there are two different potential definitions of incremental here. One is incremental as in doing something small, and I think that's what you're referring to. Another is incremental as in doing something in the same direction you're already going. So maybe you mean small actions that actually take you in a whole new direction you haven't explored before, and that can eventually show that the new direction is a good one?
ALEX: Yeah, that's right. I think that in the popular imagination those two definitions are mixed together and intertwined, and I think they can be separated. Maybe I should just say small actions. Small actions, taken repeatedly, with intention, responding to interesting behaviors, can cause huge things to happen over time.
SPENCER: So another thing I think about when I consider why organizations slow down as they get larger is that people specialize more. Imagine you are a start-up founder — maybe you have a co-founder — and you kind of have to do everything, right? Then you hire a few people, and maybe they take some things off your plate, but you're still doing a ton of things, and they're probably also doing way more things than they would in a larger organization. Then, when you get to a really large organization like Google, you might literally have a person whose job is to handle just one page on one particular product. That's super, super specialized. And it makes sense, because at Google's scale, changing a button actually could have massive consequences; it could affect millions of users, right? If you're a tiny start-up, focusing on optimizing one little button or one little feature probably doesn't make sense, because there are so many more important things to do that will have more impact at that level. And so, because people in larger organizations get increasingly narrow focuses, everyone is responsible for less and less. So it's hard to take large actions, because if your whole domain is just this little thing, how are you going to move the whole organization?
ALEX: Yeah, that totally tracks for me as well. When I look at specialization in that context, it's a form of structuring the problem to some degree: “Okay, you're going to be responsible just for the precise drop shadows of this button,” or whatever. Structure allows you to go faster and be more efficient, but it's also a liability: when you need to change direction, it makes changing direction harder. The more you accrete this structure, at each point it feels like, “Oh, let's make this thing we're doing slightly more efficient.” Sure, it does. But then if the world changes, if the context changes — and it will definitely change, because the context you're in is always changing based on what your users are doing, what your competitors are doing, what technologies exist, a global pandemic — that structure will be really, really hard to get out of, because everyone will snap back into that exact specialization. I think generalists are super useful in these kinds of contexts. You can think of it as a balance: what's the overall ratio of specialists to generalists across your entire organization? To your point, it's mostly generalists at the beginning for small startups, because it has to be; you've got to wear multiple hats. As you get into a larger and larger organization, it gets more specialized. But how can you inject a little bit of randomness into that? You get this naturally when people move across the company. If you encourage it, saying, “Hey, after spending a year or so on your project, it's totally fine to go find another project within the company if you want. That's not something to feel embarrassed about,” that's great. That gives you a little cross-pollination, a little bit of these ideas flowing through the system and slowly changing it over time. You'll also sometimes find that the departure of a lead is a big shock to a system. Those shocks can cause the system to reconfigure, go into a little state of chaos for a bit, and then re-anneal into an interesting other shape. So I think encouraging people to dabble, to be a little more generalist, to switch to things a bit outside their existing wheelhouse (“Maybe I'll switch to a role that does that.”) is a really useful thing to do. I think it was GE that famously had a program that would deliberately cross-pollinate a small number of people across very disparate business units as a way of injecting structured noise. And you can think of 20% time in a similar way, as deliberately injecting a little bit of structured noise.
SPENCER: Alex, thanks for coming on. This was a fun conversation.
ALEX: Yeah, thanks for having me.
[outro]
JOSH: A listener asks, “What's something that's not directly related to your work that you'd recommend to anyone, especially if it's something that seems random or unexpected coming from you?”
SPENCER: I find it hard to generate things I believe that would seem random or unexpected coming from me, to me at least [laughs]. That part is hard. But I will say some things that I'd recommend to just about everyone. Obviously, there's almost no advice that applies to literally everyone, but one thing is avoiding enemies. I think that having enemies is very bad; there's a huge amount of negative-sum activity where people damage each other. So that's something I try to do. There are times when it's justified to create enemies, right? There are times when you have to stand up for someone and that creates an enemy, or you have to combat some evil in the world that's perpetuated by a harmful person. But I would say avoid enemies unless you have a really, really good reason to. It just creates too much drama, and it wastes time and energy. A second one, which is related, is paying attention to the way the people around you cause you to behave. If the way people around you cause you to behave is more like the way you want to be, that's a positive sign; if it's less like the way you want to be, that's a negative sign. I think that's just a really useful thing to observe. And for the people who, when you spend time around them, make you behave less like the kind of person you want to be, ask yourself whether that's really the kind of relationship you want. Maybe it is, but I think it's an interesting question, and it can be a route to realizing that someone is an unhealthy influence and actually not a good person to have in your life. A third thing — maybe this is extremely expected coming from me — is that I think almost everyone could benefit from self-experimentation, where you try things in a systematic way. You say, “Okay, something I want to work on. I want to work on my sleep. What can I do for my sleep?” You come up with a few ideas, pick one, try it, and see if it helps. Of course, this is an imperfect process; there are placebo effects and self-deception and all those things. But I think this process, while somewhat error-prone, can lead you to some really good changes in your life, especially if you make a habit of trying a new thing every month or two. I try to run a self-experiment pretty much at all times; I almost always have one going, though sometimes they lapse for a little while. Most of them fail, but they add up over a lifetime, and you can learn some important stuff. So those are three things that come to mind.