June 13, 2024
How does Emmett's worldview differ from the standard Silicon Valley worldview? What's the difference between an ideology and a worldview? What's middle management useful for? How might democracy be improved? How important is optimism? Why do people seem to get less done each day than they expect to get done? When is high variance beneficial? Does every startup have a point where it seems like they're going to fail? What's the best and worst startup advice out there? What's the right way to learn from users / customers? When should companies follow trends? How should we think about the different types of AI risks?
Emmett Shear is an entrepreneur and investor. He was part of the first class at Y Combinator in 2005. He co-founded Justin.tv in 2006 and its spin-off company Twitch in 2011. In the same year, he also became a part-time partner at Y Combinator, a role in which he continues to advise new startups. He very briefly (for 2.5 days) acted as interim CEO at OpenAI in November 2023. Follow him on Twitter / X at @eshear.
JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today! In this episode, Spencer speaks with Emmett Shear about: intersubjective concepts and distributed systems, optimism bias, and leading successful tech startups.
[promo]
SPENCER: Emmett, welcome.
EMMETT: Thanks for having me.
SPENCER: You have so many ideas. You're just bursting with ideas, which I really find fascinating and I really enjoy following you on Twitter. Let's just start with your worldview because I've heard you say that you want to start putting the Emmett worldview out there in the world. So what is the Emmett worldview?
EMMETT: The hard part about that is that it's not very compressible, I realized. Maybe that's the heart of the Emmett worldview, actually, that worldviews are big, complicated, messy gestalts matching the world itself, which is this big, highly-detailed thing. You can't easily compress them into things that you can just say. The world, in general, isn't made out of simple things where there are easy, simple answers. And yet, of course, the only thing that's really valuable is figuring out how to compress something into a short thing that you can convey.
SPENCER: And yet it feels like so many worldviews are these incredibly simple things like "capitalism bad" or "capitalism good." Isn't that what a lot of the worldviews we see out there are?
EMMETT: I think that I draw a distinction actually between an ideology or an ideological commitment and a worldview. Worldviews, I think, you can only absorb through spending a good deal of time with someone. And usually, they can only be transmitted almost one-to-one, although I do think if someone writes enough on a sufficiently broad array of topics, you can eventually start to absorb their general approach to things. But usually, you actually have to get it in person because a worldview — the way someone actually engages with and sees the world — is something that stretches across more than just a single domain or even a small set of domains, whereas ideologies tend to be fundamentally simpler. They're a single lens, and unless you've been totally captured by a single ideology, it's very rare to find a person whose worldview is actually just this one lens over and over again.
SPENCER: That makes sense. And it makes me want to take this opportunity in the few minutes with you to try to extract as much of your worldview as I can. Maybe a good place to start is to diff it against the closest worldview. Is the closest worldview the standard Silicon Valley view? And then you can take a diff on that.
EMMETT: That might be fair. If I was to describe the standard Silicon Valley view of the world in a horrible stereotype, it would be: markets are good, technology is powerful and mostly a force for good, and both of those things are true because, at an underlying level, humanity is fundamentally mostly good. Therefore, when given more powerful tools or technology, when given freedom, while we do both good and bad things, it mostly trends towards good stuff. The most wonderful time is in the future, where we're building towards a utopian future. The past is fine but not as good as today, and the future will be yet better. There is a march of progress throughout history. And I think the other final bit I would add to the standard Silicon Valley worldview is that variance is good. Sure, there are downsides, but the upside of positive variance outweighs the downside of negative variance. And if the cost of good things is also some bad things, then we'd rather have higher variance than lower variance.
SPENCER: So does that relate to destruction or creative destruction?
EMMETT: Yeah, well, it shows up in a bunch of things. Variance in company outcomes is good because, with new technologies, if you put a limit on how big something can get, you cut off a lot of the power-law advantages of something getting really, really big. The cost of that is you take a lot of risk, and a lot of companies just totally fail and die. We should just be accepting of the failing and dying part in exchange for having more big wins. So I think those, as general principles, are all pretty decently good. I would also say, on the political side: democracy is good, voting is good.
SPENCER: But maybe also that there's a role for people to do things outside the system, like for a billionaire to have a positive influence and not just a negative influence.
EMMETT: Right. I would say it's almost like the traditional story of America, the story of American worldviews. That's like, "Yay! Democracy and capitalism together. Freedom. People should do what they want, but the government should be there to create a level playing field and safety. Beyond that, you should trust people with freedom, and that system is going to work great." The place where the traditional American civic religion may be, in some ways, most alive is still Silicon Valley. I think it's not a bad approximation of my worldview; it's kind of close. When I look back on it, the evolution of my worldview started with very much buying the standard story. Then I went through this whole personal journey around it, then I came back to very much believing the standard story again, but for more sophisticated reasons maybe. I went on that full hero's journey — or maybe, for a lot of people online, that would now be the midwit meme — of believing it for dumb reasons, doubting it for complicated reasons, and then believing it for more complicated reasons, in a cycle.
SPENCER: Can you tell us about that journey? What did that look like in practice?
EMMETT: You're gonna have to break it down piecewise. So an example would be money. I remember, at first, thinking money made sense. You have money, you trade it with people for stuff. That's the naive view of money. You need money to get stuff, and other people give you money for stuff from you, and it all works. Then I remember being around nine or ten and just getting really confused, like, "Wait, why does my dad have to go work for money? And then he brings money home, and then we use the money to go buy stuff? Why doesn't he just go to work and then we just take it from the store? Why do we need all this money? It seems like an unnecessary source of complication. Why does it even exist? It just seems like this pointless worship of stuff that, when you look at it, has no real substance to it, right? Money is just dollars. Even if it was gold, it's just some shiny metal. Who cares?" And I remember wrestling with that idea for quite a while. I would talk to my dad about it. I remember I had a friend, Ira, and we'd go on these long walks, we'd play catch, and we'd argue about it. It was a weird thing for ten-year-olds to argue about, I think: money, capitalism, and the structure of society. Over the next three or four years, as I talked about it more and thought about it more, I came to realize, "Oh, I see. How do you know which thing to go do? How do you figure out how much of which stuff you can take without running out of stuff?" And I designed this whole system and then realized it was equivalent to having money again and was like, "Oh, I get it. I see. That's why we have money." I think most people at some point go through that sort of process, from just accepting money as this thing that exists and, of course, you have to use it, to, "Wait, money, dollars? Those are just pieces of paper." It's classic sophomoric thinking — the college sophomore, maybe the high school sophomore, has this insight. Maybe I had it a little young, but not that much younger than most people, I think. And then you come out the other side. I had a similar journey, I think, with democracy, where obviously democracy is good because it gives people a voice. You don't want to be ruled by an evil dictator; you want to have a good democracy where everyone is equal and has power. Then I encountered the real world and started running a company. I was like, "Wait a second, if I tried to run this company as a democracy, it would be terrible. It's obviously not the best way to make individual decisions or even to select a leader. It's just not clear that having a popularity contest is a good way to go about doing that. At the very least, you should maybe try to have some sort of test of skill." That really bothered me. "Wait a second, we run our government based on popularity contests? What the hell are we doing?" Obviously, it would be better to find some way... and every now and then you get a dictator who does a relatively good job. And so the problem seems to be, "How do you just get a good dictator?" Yeah, there's this issue of succession — when the good dictator dies, you often don't get a good one next. That does seem like a problem but, surely, we can just solve that problem. I thought about that for a long time, and I don't think we have the time to get into it in great depth.
But eventually, I came to the realization, "Oh, if I go through the process of trying to select the good dictator and game out all the routes around it, you wind up coming back to something that looks basically like democracy again." You wind up reinventing democracy from first principles if you think about it long enough, if you take the succession problem seriously. That was the second realization: "Oh, I see." You don't necessarily need exactly our system of democracy. I'm sure there are other systems that could work, that could solve the same problem. There are multiple systems of money that can work. But you need something that fulfills that role, and it's going to look, at the end of the day, something like democracy. If you don't have that, at the scale we're at, you wind up with an unstable system in the long run.
SPENCER: A bunch of what you write seems to touch on this idea of what's sometimes called intersubjective concepts like money or democracy. They're things that only make sense because people believe they make sense. The moment that people stop believing money's worth anything, it's no longer worth anything. The moment people stop believing America is a place, it's no longer a place. I'm curious why that seems to be a theme for you.
EMMETT: In general, I'm interested in those intersubjective things because that's how coordination mechanisms work — when the information is in the network, not in the individual. You think about the way your cells communicate: the most interesting thing in terms of understanding the human body is, at least for me, all the intercellular, intra-body signaling mechanisms, the hormone systems, and the nervous system — signaling mechanisms that coordinate things. Democracy and money are coordination mechanisms to enable multiple independent agents to cohere into a direction that makes sense together and to solve the problem of how the individual component knows what to do without full information of the whole. Even in a system like the global economy, I don't know that it's possible to create a closed-form solution even if you had a very powerful computer — the most powerful computer you can imagine — I think trying to solve the problem in a closed-form way may be beyond what we could ever do. So you have to figure out systems for local coordination that create good global results. There are sort of only two kinds of problems in the world: technical engineering problems, where you have to figure out how to do something efficiently with fewer resources, and coordination problems, where you have to figure out how those engineered or evolved components can be used together when they may not share full information. Personally, I find coordination problems more fascinating, even though I enjoy engineering.
SPENCER: It seems like society is much worse at the coordination problems, at least that's my impression.
EMMETT: Yeah, coordination problems are the hard problems. Engineering is easy in comparison. You can do more with it. You set a higher bar for your engineering because you have control, or you know all the things going in. You have a singular sort of God's-eye view; you get to design the whole system top to bottom. Whereas, in coordination problems, you're doing engineering but in this messy world where you don't have all the information, and where you can't rely on the other components in the system to agree with you or to do things the way you want them to. The other coordination problem I remember very sharply running into was management. When we started Justin.tv, I was against management. I had read the Google thing about how they had super high management fan-out. This is a common Silicon Valley thing, thinking, "We don't like hierarchical management systems. We believe in distributed management, like capitalism and democracy, which are these fully distributed systems. We don't need a hierarchical management system." And I really bought into that: "What do middle managers even do? Why do you have these people? Why do you have management at all? People should just know what the goals are and work towards them." But if you've ever tried to scale a company up without management, you rapidly realize that's not a very good idea. There's this great essay, "The Tyranny of Structurelessness," that was written about feminist groups but applies equally to corporate America, that goes into great depth about why removing the official hierarchy does not get rid of hierarchy but merely causes it to go underground. It was a real learning: "Oh, that's what management is for." Let me see if I can explain the two-sentence version of why management is important. Management is important because the CEO has some vision of where the company is going, and the people on the ground have some understanding of what's actually happening. You need both a top-down and a bottom-up information propagation mechanism: the vision gets broken into pieces and passed down, and consolidated information gets passed up. The amount of work required to have a full 360 point of view of what the company is doing and of the strategic vision is almost a full-time job in itself. So, people who are actually doing work can't hold what everybody else in the company is doing in their head. And so, you need a mechanism for passing information up and down, which is what management is for. It's to abstract details up and fragment the vision down, and to hopefully help reconcile the gaps, so that the vision changes in response to real facts happening at the edges, and so that the people at the edges understand how their work fits into the big picture. Obviously, management often doesn't succeed at that goal, but if you tried to get rid of it, you wouldn't get magic collaboration; what you'd get is utter confusion.
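To make the scaling argument concrete, here is a rough back-of-the-envelope sketch — the span-of-control number is an illustrative assumption, not something from the conversation. With a fixed number of direct reports per manager, the number of hops information must travel between the CEO and the edge grows roughly logarithmically with headcount, and those hops are the middle-management layers.

```python
# Toy calculation: approximate management layers needed for a given
# headcount, assuming each manager can handle `span` direct reports.
import math

def approx_layers(headcount: int, span: int = 7) -> int:
    # The depth of a tree with branching factor `span` covering
    # `headcount` nodes is on the order of log_span(headcount).
    return max(1, math.ceil(math.log(headcount, span)))

for n in (8, 50, 400, 3000):
    print(f"{n:>5} people -> ~{approx_layers(n)} layers")
# 8 -> ~2, 50 -> ~3, 400 -> ~4, 3000 -> ~5: the layers appear as you
# scale, whether or not you put them on the org chart.
```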
SPENCER: Could the same be said for democracy, that it's a mechanism to propagate information from the people up to those in charge?
EMMETT: That is a common view of democracy. But people who think that's what democracy is for often wind up disappointed in democracy, because your beliefs about what policies should be passed don't seem to be reflected very well, and you end up getting to vote for Biden versus Trump when you don't like either of them. If it's supposed to be aggregating your opinions about policy, it does a very bad job of that. Why don't we ask you about your policy opinions, then? Why are we running a popularity contest? And the answer is that democracy isn't really designed to do that, nor should it be, because most people, myself included, don't have well-informed opinions about most policies because we don't have the time. To have a well-informed opinion about housing policy, you have to spend a lot of time on it. I think I have a well-informed opinion about housing policy specifically because I spent months and months thinking about and studying it. But I don't have a well-informed opinion about energy policy. I have a vague one, but I wouldn't trust myself to write any rules about it. That's why representative democracy is a good idea. Democracy's point is not, in fact, to aggregate people's understanding of the world. You'll notice that in democracies, incumbents almost always win. It's very consistent. Once you're an incumbent, you by default win over and over again. That's a feature, not a bug. You don't want to churn leadership all the time. Healthy leadership, when it's working, should continue as long as possible because you've got someone in the seat who is working well enough. If you churn leadership over all the time, you get chaos, which is actually very bad. Democracy's point is, when it gets bad enough, you can evict the bad leader. That's why democracy is good. Ultimately, it allows you to remove the bad leaders. It's a safety release valve, not a direction-setting device. And trying to make it more than that tends to end badly. I have a private, crazy belief about democracy: we should switch the frame from "Who do you like more, incumbent versus challenger?" — which is how most elections go — to: every election, the incumbent wins by default unless they get a vote of no confidence. It's, "Do you want the incumbent, or do you want to vote that person out? If you vote the incumbent out, then we have an open-seat race for who wins. But if you don't vote the incumbent out, they just win by default."
SPENCER: Do you think there should still be term limits?
EMMETT: No. I think term limits are a dumb idea, basically — this idea that somehow the problem is people staying in power rather than the underlying structure of the system. What term limits do is empower the bureaucracy at the expense of elected officials. I don't think that's a healthy dynamic because, when you have term limits, the people in the bureaucracy know that all they have to do is wait you out. If they can drag their feet long enough, they'll outwait you, and the next guy won't have the same ability to pressure them. I don't mean to blame the bureaucrats, because sometimes the things that elected officials tell them to do are stupid. But there's a balance of power, especially in the modern state, between the agencies and the elected officials. We've tilted it toward the agencies over the elected via things like term limits, and I don't think that's healthy. It also empowers the party at the expense of the individual elected official, because you have to keep moving up into future seats, which means you need the support of the party. That's unhealthy. It causes way more political polarization and requires everyone to fall in line, because if you go against the party, you can't win. And that's also unhealthy. So, I'm pretty against term limits as a scheme. I actually think this move to no-confidence voting would fix the issue that term limits are attempting to fix. The problem with running incumbents versus challengers is that it's hard to get good candidates to run against incumbents because it's a low-EV move. The incumbents will probably win, so the most attractive candidates wait for an open race. That's unfortunate because it means you get weak candidates running against the incumbent, further cementing the incumbent's advantage, because it's like, "Yeah, I don't like the incumbent, but this other guy looks terrible." And so, switching to a no-confidence vote would result in higher turnover for incumbents, because it's easier to say, "No, I don't like Biden, I want to vote him out," than, "Oh, I want to vote for Trump specifically." I have this theory that democracy is powered by disliking candidates more than liking them. And so, you want a system that lets people express their dislike. Let's lean into it.
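The rule Emmett proposes is simple enough to write down; here is a minimal sketch of it — my toy encoding of what he describes, with a made-up 50% threshold for illustration:

```python
# Sketch of the no-confidence election rule described above: the
# incumbent wins by default; only a failed confidence vote triggers an
# open-seat race among fresh candidates.

def election_result(confidence_share: float, open_seat_winner: str,
                    incumbent: str = "incumbent") -> str:
    if confidence_share >= 0.5:
        return incumbent          # retained by default; no head-to-head matchup
    return open_seat_winner       # evicted; an open race picks the successor

print(election_result(0.62, "open-race winner"))  # incumbent stays
print(election_result(0.41, "open-race winner"))  # incumbent voted out
```

The design intent, per the conversation: voters get to express dislike directly, and strong challengers only need to show up once a seat is genuinely open.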
SPENCER: I think what worries me about the idea of having no term limits is that, eventually, someone figures out how to capture the system and then they capture it for the rest of their life.
EMMETT: How do they win the no-confidence votes?
SPENCER: Well, how do dictators in various countries have infinite term limits and yet seem to always win at the (quote-unquote) polls?
EMMETT: I'm okay with term limits for the president and the governor. For the executive branch, I am pro term limits. I think you're right. Allowing an executive to stay in power for too long is dangerous for a bunch of psychological reasons. People start to accept them as sort of the king over time. I think there's a reason why Washington resigned and why we've had term limits on the presidency for a long time but not on the Senate or the House of Representatives. It's a nuance: in the executive branch, it's very dangerous to leave the same leader in place for too long. Eight to ten years, a decade, is probably a good length. You don't want to go too much longer than that. For what it's worth, for the presidency, I would just make terms eight years long; the whole 'running for reelection as a sitting president' thing is stupid, and I think we should just not do that. But that's a minor tweak. Most elections are not for the executive. You have one executive and lots and lots of legislators. For legislators, term limits are bad. Senior legislators are better at legislating; it's a skilled thing you learn how to do.
SPENCER: So stepping back, one criticism we could have of your worldview is like, "Wait, are you saying you're just pushing the Silicon Valley worldview, except that you have a more nuanced reason to push it than the average Silicon Valley person?"
EMMETT: Yeah, I think that criticism is somewhat fair. It's not entirely unfair. I absolutely am aligned with a lot of the things that Silicon Valley broadly believes, because I think those things are good. I think that the truth is technology is good, that it raises the standard of living and allows people to live better lives. I think that we can build a better world in the future. The past is easy to romanticize, but actually, it was kind of a shitty place to live a lot of the time. And I believe freedom is one of the most important values. I think those are all true. And so, at a very high level, yeah, I'm pushing the Silicon Valley worldview and the Silicon Valley gestalt because I think it's right. That's the reason I moved to San Francisco — these are my people, this is what I believe in. But I think one of the main points of divergence for me from the traditional Silicon Valley worldview is that I'm fundamentally not a utilitarian consequentialist. I'm not a high modernist. There's this idea that you can design the right system — that if we just figured it out, you could find a list of the best charities to fund and rank them, and there's a correct answer to that which is global and universal. There's an idea that you can build the right operating system or phone, that there's a correct answer, that there's a system you can impose. You see it recently with Bryan Johnson talking about throwing out all the existing superstitions and redesigning humanity's worldview from first principles. I love first principles. I love designing from first principles. But this idea that you can rationally create a system of the world that will finally give you the right answer and the correct direction is delusional and dangerous.
[promo]
SPENCER: So when you think about this consequentialist utilitarian kind of point of view, which premise do you actually disagree with? Or where do you get off the train in that kind of reasoning?
EMMETT: I believe it underweights how uncertain we are and how much of our information is trapped in these local bubbles. As a result, it causes you to try to create universal and global things rather than local and effective things. It's not that you should never think about the global context of your actions, but I think it underweights this dramatic need for humility and groundedness about the fact that I only really know what's going on in San Francisco, and maybe in a few other cities in America, at any kind of detailed level, because that's the city I walk around in all the time and see. I understand San Francisco pretty well because I'm in at least two-thirds of the city, biking around, walking around, seeing it, being in it. I have lots of friends in it, and they're there too, and I'm getting all this rich data about it. So when I talk about trying to reform or make San Francisco better, I'm speaking from this position of real, deep, embodied knowledge. I'm not just trying to change it arbitrarily. I don't have some universal theory of the world; I have a theory grounded in reality in a very deep way. Whereas when you go and try to do foreign aid for something in Africa, in a place you've never been, especially if you do it without going there... If you want to do that, go there, look around, find out what's happening. It's not that you should never try to help, but you have to be so careful. The burden of work to really make a change goes up so far the farther away you go, because you know so much less and you're so much more likely to fuck it up. If I were to describe the difference, I've come to really have this point of view that you should focus on trying to make your personal community a good place to live, and for everyone you know and connect with personally, for that to be going well. And then, if you can do that, maybe invest in your neighborhood or your city and make that a great place. If you feel like your city is going really well, start reforming at the state level and the country level. When your country is a shining beacon of light to the world, you can start trying to reform everybody else's countries.
SPENCER: I feel like San Francisco is the ultimate irony on the point you just made.
EMMETT: There's this big distinction between trade and cooperation on the one hand and altruism on the other — going to try to help someone unidirectionally versus trading with them. Trading with anyone who wants to trade with you globally is wonderful because you can trust that they're deciding on their end; they're giving you something you want, you're giving them something they want. You don't really have to understand them super deeply to make that trade effective. You can trade with a stranger, and that's wonderful. That's one of the beauties of some of the underlying ideals of freedom and capitalism, that trade with strangers is possible. The dangerous thing is when you go in and inject a bunch of energy into a system where you're not getting reciprocal energy back out, because you think it's going to help. But who is it going to help, and how do you know? They're not willing to pay you for it. So it's very hard to tell if what you did is good. It might be — sometimes people actually do manage to do effective altruism. They do manage to go out there and put money and energy into a system globally, and it makes a big positive change. A really obvious example of this is the polio vaccine. That's such a good idea, and I think the reason it's a good idea is that it's very straightforward. People getting polio and dying is bad. The vaccine has very few side effects, as far as I know, and works very effectively in preventing people from suffering from this terrible disease. There are really no trade-offs. The only trade-off is having enough resources to do it. So you can very confidently go and spend a bunch of resources getting everyone vaccinated against polio. And that's a good thing. It's hard when you start trying to promote democracy, because there's a whole system of power in place, and you don't understand it. You're not there. You know what's being presented on the surface, but as we all know, the surface presentation of power and the reality of power are often very distinct from each other. It's just really hard. And so I think some real humility is needed when you want to go act on the global stage — real humility in scoping down your actions to the things where you're confident you will actually be helping. You're not trying to maximize your utilitarian impact. You're trying to act in ways where you have a fuller understanding and where you can relate to the people you're helping and understand it more directly.
SPENCER: I feel very conflicted about what you're saying. Because on the one hand, I think you're absolutely right, we need to have epistemic humility. If we're going into a place that we don't understand deeply, there can be all kinds of problems. And these kinds of problems occur all the time, where well-intentioned people actually end up causing harm because they don't understand what's really going on. They don't understand the local conditions, they don't understand the lives of people and so on. On the other hand, you see just a huge amount of ineffective local work, where it ends up being ineffective for very different reasons. It's ineffective because people are just focused on the warm fuzzy feelings that their work gives them or they're just latched on to one type of way of helping, and they completely ignore the evidence. They're not looking at randomized controlled trials about what really helps people. And so I see these as both very, very big problems that need to be navigated.
EMMETT: I think you have to remember, I'm coming from a world where the idea that you should look at the evidence and data, and that results matter, is in the water. That's just undisputed. Of course you have to do that. The question is, "What's the scope you do that at? Where do you do that?" I think about it a lot when advising YC companies. There's this tendency for founders to imagine these poor workers inside of these big companies, and imagine what they need, and then try to go build stuff for them. And that never works. What works is you have to go and meet them and talk to them and get to know them, and then think about their problems to the point where you understand their problems in their own company better than they do. The successful founders going into enterprise — where other companies are going to be their customers — understand the real political power graph, the real org chart, and how software actually gets made in the company better than the people inside the company do, because they get to spend all their time just thinking about that and talking to people and learning. And then they build a solution for that. If everyone who wants to go help a poor community went and spent months within that community, talking to people, learning from them, proposing things, trying little interventions, seeing if they help, doing things by hand first, and then went and said, "Okay, I know what we need to do here, I'm gonna go build this thing that I think will be helpful" — that sounds great, A+. But when you think about people talking about doing foreign aid, the percentage of them who have gone and honestly, truly spent significant time with the people they want to help before proposing something is just very low. People just don't actually do that. I've learned through YC how rare it is, because it's one of the main things we have to teach founders to do. They're heavily economically incentivized to get it, and they still don't do it. Let alone nonprofits, where there's no strong incentive to get results other than your own desire to get results — it's no surprise they don't do it either.
SPENCER: One thing that I see very strongly in Silicon Valley that I see less strongly in other places, including New York, is this idea that we have to be optimistic, that optimism is an asset. I'm wondering what you think about that?
EMMETT: I actually have always been a very optimistic person, and I've always felt it was an asset. I noticed I was confused about that for a long time; it seemed irrational. Shouldn't you want to anticipate the most accurate results? An optimistic bias and a pessimistic bias both seem bad, right? Because surely, you want the most accurate possible expectations always. And then I started learning about Active Inference, an idea from this guy Karl Friston that I got very interested in. It's been bouncing around for a while. There's a great Slate Star Codex blog post, "God Help Us, Let's Try To Understand Friston On Free Energy." It's not the easiest concept to understand, but one of the concepts is actually very easy, which is this idea of agents. An agent is just any being — an agent is a human being a lot of the time; a neuron is also an agent, but we'll just take a human, or an animal, as the example. Agents have a set of expectations about what they will observe in the future, and then they take action in order to make those expectations come true, more or less, because they don't like being surprised. Now, one way to make expectations come true is to alter your expectations by understanding the world better, such that you more accurately anticipate what will happen. But actually, when you have expectations like, "I will maintain homeostasis, and I will be neither too cold nor too hot," it's very important to have an optimism bias. The optimism bias is: if I just randomly wandered around the world, my expectation would be that I will probably be too cold or too hot a lot of the time. By anticipating that I will not be, instead of just expecting to be uncomfortable, I take action towards fixing it. I would say the ultimate optimism bias is that organisms — agents — anticipate their own survival at a deep level. We anticipate ourselves persisting into the future. And agents that don't anticipate their own survival — against the guarantee of entropy, against the default thing that will happen — don't last very long, because they don't take action to maintain their homeostasis. There are sort of two levels of optimism bias. It is possible to have a bad optimism bias, to be just delusional — that manic state where the things that you are optimistic about are impossible, unreachable. You make bad decisions based on it because you're setting yourself up to fail. But another way to see optimism bias is target fixation. If Elon Musk doesn't anticipate, doesn't believe, "Okay, there's a pathway to this Tesla thing, I'm going to start taking the actions that lead down that pathway because it is reachable," you never get there. You have to be pulled towards the outcome, have your actions walk down the pathway towards it, and expect it into reality. I guess another way to put it is: manifestation is real and the Secret is true; it's just way harder than people give it credit for.
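Here is a minimal toy model of that loop — my sketch of the active-inference idea as Emmett describes it, with made-up numbers. Two agents start out cold; one revises its expectation to match the world, the other holds the "optimistic" expectation fixed and acts on the world to make it come true.

```python
# Two ways to reduce surprise: update the expectation (accurate but
# passive) or act on the world to fulfill the expectation (optimism bias).

def simulate(optimistic: bool, steps: int = 50) -> tuple[float, float]:
    world_temp = 10.0      # it's cold out here
    expected_temp = 21.0   # the agent anticipates being comfortable
    for _ in range(steps):
        surprise = expected_temp - world_temp
        if optimistic:
            world_temp += 0.5 * surprise      # turn up the heat: change the world
        else:
            expected_temp -= 0.5 * surprise   # resign yourself: change the prediction
    return world_temp, expected_temp

print(simulate(optimistic=True))   # world warms to ~21: the expectation came true
print(simulate(optimistic=False))  # accurate prediction, still freezing at ~10
```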
SPENCER: Right, but it's not just believing. It's like believing is the first step. It's necessary, but far from sufficient, right?
EMMETT: Well, it's interesting. There's belief in belief. One of the best ideas to come out of rationalism, I think, is the idea of an alief as opposed to a belief. An alief is something you profess but don't anticipate. So, you profess to yourself, you verbally say to yourself, "Oh, I love going on roller coasters." But before you go on the roller coaster, you actually have this flinch, and you don't really want to do it. You don't actually love it, but you've told yourself you do. You have a story that you do. Telling yourself a story that you'll be successful or that you'll get the promotion is totally unhelpful, actually. It leads to mostly bad outcomes, especially if you don't actually anticipate it. But truly, in a deep, rich way, anticipating the success — if you can get yourself to actually anticipate it — is actually causally related to you getting it, and maybe the only thing you need to do. Because to truly anticipate it, what that leads to, what that flows to — unless you actually become totally delusional and disconnect from reality, and you stop seeing real things and allowing data to update your worldview — is that you anticipate yourself learning the skills you need to learn. It means you anticipate yourself working really hard. And it'll be easy to work hard because you anticipate the work paying off. Burnout is the opposite of this. Burnout is when you start failing to anticipate that your work has any meaning or that it will have an impact. If you don't anticipate that your work will drive change in the world, getting the motivation to work becomes almost impossible. It's like learned helplessness; you just can't do it.
SPENCER: I ran a poll the other day that I found really, really interesting and surprising. I asked people, "Regarding how much productive stuff you tend to accomplish in a day, do you most often: 1) get more done than you expected; 2) get the amount done that you expected; or 3) get less done than you expected?" Only 1% of people said more than expected, and 72% of people said they get less done than expected. And what I find so fascinating about that is we're talking about daily activity. We're talking about something that people would have an abundant amount of evidence to update on. And yet, every day, day after day, 72% of people are getting less done than they expected.
EMMETT: When you think about people trying to drink water every day, do you think people drink more water than expected, not as much water as expected, or exactly the amount of water they expected every day? Assume these are people who are thinking, "I need to get my eight cups of water a day." Some of them hit it as expected, but I bet for almost all of them, the failure is on the "I didn't get to my goal today" side. If you think about people with gym memberships or trying to eat healthy, do people exceed their expectations or miss their expectations? And the answer is, they miss, consistently. And the reason for this is that they're trying to shift their point of homeostasis. They're anticipating being — wanting to be — at a homeostasis point above where they actually are. They want to be harder working than they are and eating healthier than they are, but their actual point of homeostasis is underneath that. And so, they're constantly fighting against the gradient that is pulling them back down towards the homeostasis point. But the way you fight against it is that, at a higher level of abstraction, you're anticipating being above it and trying to climb the hill. That's willpower, I think, as long as the willpower lasts. Of course you're going to consistently miss that point, because whatever point you pick, you're going to land somewhere in between the bottom of the gradient and that point at the top. You're never going to overshoot it, because that's the point you're aiming for, and you're coming up a hill, so you're never going to miss it on the upside. You'll probably miss it by some amount on the downside most of the time. And this is totally unsurprising because, of course, that's how it works. It couldn't work any other way.
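A quick simulation makes the asymmetry vivid — my toy model, not Emmett's numbers. If each day's outcome lands somewhere between your homeostasis point and the target you're anticipating, you almost never beat the target and usually fall short of it, which is the shape of the "72% got less done" pattern from the poll.

```python
# Each day, willpower closes only part of the gap between where you'd
# drift (homeostasis) and where you anticipate being (target).
import random

HOMEOSTASIS = 0.0
TARGET = 10.0

def one_day() -> float:
    effort = random.uniform(0.3, 0.9)   # fraction of the gap willpower closes today
    outcome = HOMEOSTASIS + effort * (TARGET - HOMEOSTASIS)
    return min(outcome, TARGET)         # climbing a hill: no overshoot past the target

days = [one_day() for _ in range(100_000)]
short = sum(d < TARGET for d in days) / len(days)
print(f"days that fell short of the target: {short:.0%}")  # essentially all of them
```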
SPENCER: You mentioned variance earlier, and it seems to me that's a kind of running theme in a bunch of what you write — this idea that variance is not bad. Can you tell us more about how that plays into your worldview?
EMMETT: Yeah. Variance is not bad or good as a first principle. Sometimes more variance is bad. There are certain kinds of things where you want to minimize variance. Ball bearings are a good example. Ball bearings don't work if you allow variance. They require precision machining. But there are a lot of situations where it's bad if you remove variance, because the thing you want is on the upside — where there's this disproportionate return profile off the top, and startup outcomes are like this. If you're investing in startups and you improve your median outcome but reduce your variance, that's horrible, because all the returns are in the very top of the distribution. And so, improving your median is bad if it comes at the expense of reduced variance. And you think, "Well, why can't I reduce the negative variance without reducing the positive variance?" And the answer is, because a lot of problems have this structure, where what's generating the variance is exploring the space of novelty — trying new stuff. And almost by definition, when you try new stuff, you don't know if it'll work or not. And so, if you allow for weirdness, for novelty, for things that don't look right — that don't look like the thing you know will work — you're gonna get more failures, and you're gonna get more upside successes. And so there often is this structure where you have to choose how much overall variance you want, and you can't get rid of the negative variance without getting rid of the positive variance. Sometimes you want neither positive nor negative variance; ball bearings are an example of that. You don't always want variance; sometimes you want to get rid of both. And, to take the startup investing example, there are certain kinds of negative variance that you can eliminate without affecting the top end. So, people who aren't willing to actually quit their jobs to start the startup, who say, "No, no, I'm gonna keep my full-time job. I'm gonna keep doing the startup on the side" — that is not a pattern that works. If you remove those people from the pool of people you fund, you remove a lot of negative variance and very little positive variance, because that's just a bad idea. It's a known bad idea. Doing known bad things causes negative variance, fine. But that's easy. Everyone does that already. That's a bad sign for a startup; everyone kind of knows that. It's the journey into the unknown stuff where you want to be careful. It's also like students in school: forcing everyone to advance at the same pace often helps the students at the bottom of the class by helping them keep up, and holds back the students at the top of the class. And that's just sort of a choice you can make. That one, I think, is one of the interesting in-between cases: Do you care more about the median outcome, or do you care more about the top outcome? What's the point of school? Is school supposed to be moving the median up, or improving the average, or are you weighted towards the people really excelling at the top end? I don't know; I think that's a harder question. I think sometimes it's really unclear how much variance you want to accept.
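The returns math behind that claim is easy to sanity-check with a toy Monte Carlo — all the payoff numbers below are invented for illustration. A variance-reducing strategy can raise the median outcome of each bet and still badly lower the portfolio's average, because the average lives in the power-law tail.

```python
# Compare a high-variance power-law portfolio with a variance-reduced
# one that has a much better median per bet.
import random

def risky_bet() -> float:
    r = random.random()
    if r < 0.01:
        return 1000.0   # the rare huge win that pays for everything
    if r < 0.10:
        return 10.0     # occasional modest success
    return 0.0          # the usual outcome: failure (median is 0x)

def safe_bet() -> float:
    return 2.0          # every bet returns a respectable 2x (median is 2x)

def average(bet, n: int = 100_000) -> float:
    return sum(bet() for _ in range(n)) / n

print(average(risky_bet))  # ~10.9x on average despite a 0x median
print(average(safe_bet))   # 2.0x on average with a far better median
```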
SPENCER: For the last segment of the podcast, we could do a rapid fire round where I ask you a bunch of extremely difficult questions and you do your best to give short answers. How do you feel about that?
EMMETT: Hit me.
SPENCER: Okay. I have a lot of questions for you. So the first question is about Twitch. I'm sure you get questions about Twitch all the time, as one of the founders of Twitch. What really surprised you about entrepreneurship, where things were really different than you thought they were going to be?
EMMETT: The most surprising thing about entrepreneurship for me was, the easy part was solving the problems. And the hard part was figuring out what problems are worth solving. I did not expect that coming in. I had this idea that it would be more like my CS degree where the hard part was making the thing work well, not figuring out what to make.
SPENCER: That's fascinating. I feel like somebody should write that down as a quote. It's been said that most successful startups have multiple points when they look like they're gonna fail. Did that happen to you at Twitch? And if you think this is a general pattern, why is it a pattern?
EMMETT: That definitely happened at Twitch. There were, I think, probably at least six or seven points where we were very close to death. Maybe most notably, very early on, Justin and I each had to lend the company $15,000 that we'd earned from our last startup — which was almost all the money we had in the world — because we couldn't raise an angel round, and we had these bills we couldn't pay. We managed to bridge ourselves to raising a round, but it was a close thing. And we almost died again later when the 2008 crisis happened. We had managed to raise a round, but then we were burning all this money and couldn't raise any more, so we had to get profitable, and we weren't getting profitable fast enough. I think it's normal for startups because startups are organizations designed to scale fast. A startup is a company designed to get much, much bigger very rapidly. That means a lot of change, and change means variance. And variance means potential death. Actually, this is very deeply related to the variance question. Startups are a high-variance situation because they are necessarily in a state of rapid change all the time.
SPENCER: Nowadays, there's way more standard startup advice for startup founders than there was when you started Twitch. I'm wondering, of the kind of common startup advice that's in the air, what's one piece of advice that you strongly disagree with?
EMMETT: Interesting. Almost all advice is great advice if you're the person who needs to hear that advice. You can see it in idioms: "Too many cooks spoil the broth," but "Two heads are better than one." Well, which one is it? Should you work with more than one person to solve a problem, or does having too many people trying to solve the problem together make it bad? And the answer is, well, sometimes one and sometimes the other. It depends on what mistake you happen to be making personally. One of the things we say at YC a lot to the startups is, "There is no good general advice. You will not get general advice from us. You will get specific advice for your startup. And the advice we give someone else — don't try to copy it directly; try to understand why we gave them that advice, and think about whether or not it might apply to you." I'm trying to think of advice I actually truly disagree with in general, and not in specific. There are a few pieces of advice that I think are good broadly. One of the most notable ones is that you should have a co-founder. It's not that not having a co-founder means you're doomed, but if at all possible, you should find a co-founder to run the company with. It vastly increases your chance of success. I think that's just generally true. I'm coming up blank on advice that is not, at least sometimes, correct.
SPENCER: It's funny you say that because it feels like Paul Graham has given so much advice through his essays that seem to be broad advice directed at startups widely. So how do you feel about that kind of advice?
EMMETT: If you read Paul's essays carefully, when he gives advice, the things he's saying are these general principles. They're like math advice. Math is always true; it may just not be applicable to your problem. Like, this topology theorem is true: if you have a hole in something, no amount of morphing it under certain constraints will get rid of the hole. That's true. But do you have a thing with a hole in it? And so, a lot of Paul's advice, if you look at it carefully, is actually implicitly conditional on something. Some of it is not — like, "Persistence is key. If you don't persist, you're probably not going to win." That is true; that is general advice. That's good general advice, that you have to be persistent, and that if you give up, you're probably not going to succeed. Well, yeah, empirically, that seems to be so. But I think Paul gives less truly general advice than people realize.
SPENCER: People often hear that they should, as a startup founder or someone creating a project, go talk to their users, go talk to the customers. And yet, many people who talk to the users and customers actually don't seem to learn the right thing. So, what's the right way to learn from your users or customers?
EMMETT: This is my favorite question. I have a YouTube video on this, on product management. It's one of the things I speak about at YC probably the most often; I give a talk on it. If you think about a startup trying to come up with great ideas for how to solve your users' problems, how to get those products into their hands, there are fundamentally two ways to talk to users. One is, you have a great idea, and you go talk to users to validate it, to make sure it's good before you build it. What you'll notice with that approach is, you've already had your great idea. And so, no matter how much you talk to the users, it's as great as it already is. No amount of talking to users can possibly change how good that idea was; it was as good as it was when you had it. That might help you not waste time on your bad ideas, but it cannot possibly make your idea better. The way you actually succeed in startups, though, is having really, really good ideas — really having the right insight. And to do that, you have to do it in the opposite order. You have to do it in an open-ended way, where you don't yet know what your great idea is, what your solution is. You have an inkling of the space or the direction, but you don't have your idea yet; you don't know what to do; you're confused. Still, go talk to people and go understand them. Go perceive them accurately, build a deep model of how they see the world and what they actually need, and then have your great idea based on that. That actually might make your idea better than it was before. That is basically the difference between people who talk to their customers and actually learn things from them and then have good ideas, versus not. Are they validating, testing their ideas, or are they learning and changing?
SPENCER: When you have a new technology that comes out that gets people very excited, whether it's new blockchain technology or new AI technology, you get this huge number of new startups that start around that idea. I'm wondering, do you think that is a bad strategy for startup founders? Or do you think that there's something right about that?
EMMETT: Empirically, the big companies seem to generally ride some new technology. Not all of them — there are other ways to succeed. But when I go down the list of the big winning startups — Google and Facebook or Microsoft or Apple or Amazon — they were 100% riding some wave: the Internet, personal computers, mobile phones (like Snapchat and Uber, right?). It's a really good strategy. It's not the only one, but it is like a big gold strike in some region: it's just true, there's probably a lot of gold there. It's worth going to mine it.
SPENCER: How would you distinguish that from trend following?
EMMETT: This is where thinking from first principles is pretty important. Yes, there is a hopeful thought, a hopeful noticing: "Starting an internet company in 2001 is probably a really good idea. There are probably a bunch of really big internet companies to go build, because this thing is going to be huge." And then there's having an actual specific internet company idea. For the first one — at a very high level, sure, follow that trend by all means. That's not following a trend, that's noticing a trend. That trend is really real. The trend you don't want to follow is the second-order trend: "This is what everyone else trying to build an internet company is doing. Therefore, I will do that too." Because remember, the median startup dies; probably the 80th-percentile startup dies. So, copying the behavior of the people who are within one standard deviation of the median startup is a good way to die. Don't do that. That's trend following. But that doesn't mean don't build an internet thing. That's a deeper level of insight. How do you tell the difference between those two things? I don't know — be smart and have good judgment.
[promo]
SPENCER: So if I recall correctly, you were in literally the first Y Combinator batch. And I imagine Y Combinator was very different back then. In what ways have you seen Y Combinator get better? And in what ways do you think it's not as good as it was when you were in batch number one?
EMMETT: Weirdly, I think YC has actually changed less than people realize. There was a point where YC had changed somewhat, in the sense that it had gotten much bigger and we hadn't figured out sharding yet. The batches were larger, and while you had a relationship with an individual partner, you didn't have a small set of startups that were sort of your startup friends that you were connected to. I think that was a disadvantage at that point. But now we've figured out sharding. And so, when I'm a visiting partner at YC, there are about three other group partners who I work with, which is just about the same size that YC was when it got started. And we've got about 25 startups collectively — I think it's like 20 startups per group partner, and then with visiting group partners, maybe like 15. That's roughly the size YC was when it was, I think, at its height, which is not an accident. That's not the first batch of YC, which was honestly a little bit small, but batch two or three or four of YC. The way YC works now is a bunch of these — basically, like YC batch four — running in parallel. And so, that gives you most of the advantages of early YC. I gotta be honest, I think I'm a great advisor and a great YC partner, but I don't know that I'm quite as good as PG himself. I think I'm pretty good, but PG was truly special at it. And so, maybe a disadvantage is you don't get literally PG. But honestly, I think I'm pretty good. And I think Seibel, for example, one of my co-founders, has been a YC partner for like 10 years. He actually may be as good an advisor and as wise as PG was. They are very different, but he's really exceptional at it. And so, I actually think that, for the most part, YC is just better than it was before, because you have access to this alumni network of thousands of startups — at this point, there are probably over 10,000 alumni. You have access to partners who have seen every possible iteration of everything in every domain at least five times recently. The scale thing is actually very beneficial. You're part of this big network. We have an investor database now where you can keep track of which investors have treated startups well in the past and which haven't. And it's just so powerful to be part of that huge accumulation of knowledge that was not there before. So, I really think YC is honestly better than when I went through it. The only downside is there are more people doing startups now than there were before. It's a more efficient market. I was fortunate to start at a point where the competition wasn't as fierce. I think that's the biggest thing that's harder for startups now.
SPENCER: The first YC batch was truly incredible. It had you in it, Sam Altman, the Reddit folks. Was it just a coincidence that it was such an amazing first class, or was that on purpose? How did that happen?
EMMETT: Of course not. It was an amazing first class because the only reason you would apply to YC in 2005 — which was not a thing a connected person, or a person who knew what they were doing, would do — is because you were young, ambitious, wanted to do startups, didn't know how, and were a big fan of Paul Graham's stuff. And Paul is a pretty good selector; he has a pretty good sense of taste in terms of who will make a good founder. And the cross of those two things — people being in it for the right reasons, not for prestige (because there was no prestige to go get) but because they really wanted to do the thing, and Paul being good at selecting — basically, until YC started to get more prestigious, you had this incredible upside where it's all people doing it for the right reasons. And I think most people doing YC today are also doing it for the right reasons, and we try to select really hard for people who are doing it because they really want to build something and they really care about building something great, not because they want the YC stamp on their resume — because that's like getting the Harvard Business School stamp on your resume; it's a prestige thing. But the hard part is, I think people often don't even know for themselves which of those two motivations they really have. And the people who are in it for prestige just aren't as good as the people who are in it for the right reasons.
SPENCER: You were already pretty well known, but you recently became known to a bunch more people because of the whole OpenAI debacle, where you ended up being CEO for, what was it, two days?
EMMETT: I'd say two and a half, almost three.
SPENCER: So I'm wondering, what was that like for you on an emotional level, being pulled into that and being CEO for two and a half days?
EMMETT: I decided to do it because it seemed like one of three things was true: the Board had made a terrible mistake, the Board hadn't made a terrible mistake, or some third unknown thing was going on that I didn't even understand. In the first two cases, me going in, knowing I didn't really want to be the CEO, I thought I could help. And in the third case, there was just so much unknown. Worst case, I could quit. And so I was like, "Fine, this seems horrible, but I will do it." It wasn't horrible. It was very stressful. I was fortunate that I had 15 years of training at being a CEO in very high-stress situations, so I could handle it without my emotional buffers getting totally wiped out. I wouldn't say it was pleasant, but it was exciting. And I think at the end, we navigated it to the best solution that was possible from where we were. In the abstract, there are always better outcomes. But it goes back to the utilitarian thing: you're not designing the right outcome from first principles, you're looking for what's reachable from here. And I think we got to effectively the best outcome that was reachable from the point where I came in. So I feel good about it.
SPENCER: How did you approach getting up to speed on all the chaos happening around you in such a short time?
EMMETT: The main thing I was looking for was just trying to understand what was going on with the Board, what was going on with Sam, what the situation I was in was, and what the company's best interest was. When you come in as a CEO, your mandate, fundamentally, as the leader of a company, is to be a steward of that company's best interest. And I was trying to figure out what OpenAI's best interest was. It rapidly became clear that unwinding what had just happened was in the company's best interest. I was just talking to people: people in the company, people close to the company, people on the Board, whoever I thought was the most informed or had a new perspective. I tried to put it together, to sense-make the world quickly. I spent about a day doing that and then about a day and a half executing on the decision, more or less.
SPENCER: It was reported in the news that the Board thought Sam had misled them. In particular, I think the Board said that Sam was not [quote] "consistently candid in his communications, hindering its ability to exercise its responsibilities." There were also rumors circulating that Sam had maybe misled people about Tasha, one of the board members, saying that she wanted Helen and other board members to be removed. I don't want to ask you to comment on that in particular, but my question would be: supposing the Board actually believed that Sam was misleading them in a way that compromised their ability to function, how do you think they should have handled that?
EMMETT: Coming out of it, I was actually a little surprised to learn that everyone involved was both doing their best and being mostly honest in the things they were saying; going in, I had assumed that might not be true. It might be that people didn't feel like they could reveal everything about what was going on, but to the degree possible, everyone was giving their real reasons and their real thoughts for doing things. That doesn't mean they didn't make a mistake. People can make mistakes while being honest all the time. But I don't think anyone was acting out of anything other than the reasons they said they were acting from. Did the Board make an error in how they executed? It's easy to second-guess someone in retrospect, because in retrospect, obviously, it didn't work out super well. What happened was not, I think, what anyone intended. But could you have done better in the moment? It's easy to say, "Oh, yeah, totally. I would have done this, I would have done that." You weren't there. You didn't have the information, and the lack of information, that they had. You don't know. And I just don't know. Maybe they made the correct mistake, given the information. The correct mistake: that's exactly how I'd put it. Maybe they made the correct mistake, given the situation they were in.
SPENCER: It's interesting to hear you say that, because so many people have criticized the Board, assuming they must have made a dumb move. But that's a little bit like a poker player going all in and then everyone saying, "What an idiot, their bluff got called, and they lost everything." Well, maybe that was actually the right move given the information the poker player had, right?
EMMETT: I think, knowing everything with the benefit of hindsight, yeah, obviously they fucked up; obviously it was a mistake. You don't wind up unwinding something if it isn't a mistake at some level. It can't possibly have been the right outcome or the right way to go about it, in retrospect. But again, it's so easy to say stuff like that in retrospect. I refuse to make that kind of judgment, because I just don't know; I wasn't there.
SPENCER: What do you think of the Effective Altruism movement?
EMMETT: I think it sounds like a good idea and someone should try it.
SPENCER: That's a hilarious response.
EMMETT: I'm paraphrasing Gandhi on Western civilization, right? Gandhi was asked, "What do you think of Western civilization?" and he said, "I think it sounds like a good idea! They should go try that!" At its heart, what I understood effective altruism to be when I first encountered it is a great idea. It's of course true and almost inarguable: when you're giving money to try to do good in the world, you should care about whether what you're giving the money to actually works. Impact matters. That should be important, and you should do your best to estimate it, either quantitatively or qualitatively. But I think a whole movement started around that, of people who had a specific consequentialist, utilitarian belief that I very much disagree with: that we can figure out the right things to do, that people who are better at understanding that should be deferred to, and that there's a right answer that is context-free. And I just don't believe in context-free answers in general. So I think there's an "ends justify the means" thing that crept in, even though all the people involved are, I think, almost definitionally well-meaning, sometimes almost pathologically scrupulous about wanting to do the right thing. And yet I think that's not necessarily a protection against making some pretty big mistakes. I would summarize it like this: the people doing effective altruism motivated from a place of love almost all seem to do good work, and the people doing it motivated from a place of fear seem to do mostly destructive things. And that surprises me not at all, because it's really hard to do good things in the world coming from a place of fear all the time.
SPENCER: There's somewhat of a split in the effective altruism movement between the very near-term, evidence-focused work, like, "Okay, let's look at the evidence on lots of different charities about how we can save a life. And if you look at all the evidence, something like the Against Malaria Foundation looks really, really good in terms of the dollar cost of saving a life," versus the big, philosophical, long-term-focused ideas, like, "Okay, well, maybe the biggest thing that's going to happen in the next 50 years has to do with whether AI kills us all, or bioterrorism kills most of humanity, or nuclear weapons go out of control. And so maybe the highest-impact thing is something in that area, something around the fate of civilization." I'm curious whether you think one of those two perspectives is more right, or more aligned with the way you think, than the other.
EMMETT: They're both wrong in one way and both worthy thoughts in another. The malaria bed nets thing is the classic drunk looking for his keys under the streetlight. Someone comes up and says, "Can I help you find your keys?" "Yeah, totally." "Okay, well, where'd you lose them?" "Oh, out in the cornfield over there." "Then why are you looking for them under the light?" "Well, it's dark over there; I can't see them." That's a little unfair; there are keys to be found underneath the spotlight of quantifiable, measurable impact, and it is good work to go do that. But most of the good that can be done in the world, unfortunately, is not easily measurable by randomized controlled trial, not highly quantified, not backed by very trustworthy impact statements. To the degree we find good to be done in those terms, we should fund that stuff, of course. But it's not going to be the best stuff. The universe is, unfortunately, not so kind that the best courses of action are the ones that are lowest variance. Let me come back to the variance thing again. You're reducing the variance on your giving by insisting on high measurability, because you know for sure you're having this impact. It's not that low-variance giving is bad; it's just that, obviously, the highest-impact stuff is going to be more leveraged than that. It's also going to be nearly impossible to predict and probably non-repeatable a lot of the time. So sure, fund the fucking bed nets. But that's not going to be THE answer; it's just AN answer. On the other side, there's the "Oh, but isn't it more important to go after nuclear risk, or AI risk, or whatever?" "More important" is the problem. It's the idea that you can rank all the things by importance and that you could know, in a global sense, which of them is most important to work on. What is most important for you to do is contextual to you. It's driven by where you will be able to have the best impact, which is partly about the problem, but also partly about you, and where you live, and what you know, and what you're connected to. If you have an inclination that there's a big risk over there, learning more about it and growing in that direction might be a good idea. But the world is full of unknowns. To think that you'll have THE correct answer? No, you won't. Not only will you not know THE correct answer, you won't even have a full ranking. You'll just have a bunch of ideas about things whose estimates all overlap each other with high variance. And you know there's a bunch of other things you could be doing that you haven't even considered yet that are probably better than that. And you don't know the payoff curves. So at the end of the day, this idea of "Step one is we should fully understand things, and then do the optimal thing" is how you get analysis paralysis. Or, in order to get out of analysis paralysis, you insist to yourself, "We have found the correct answer: AI x-risk is the most important thing. That is what I'll devote my life to, because nothing else is nearly as important." And like, maybe, maybe not. How do you know? You don't know. You can't possibly know, because the world is complicated. And so it's a worthy question: What's the most important risk facing humanity? What's the most important thing I can be working on?
What's the most highly measurable impact I can have? But you know, the charitable work I've done that probably had the biggest leveraged impact has always been opportunistic. There's a person in a place, and I know, because of who I am and who they are, that I can trust them; it's a unique opportunity, and I have an asymmetric information advantage. And I'm going to act fast, with less oversight, and give them money to make it happen more quickly. That's not replicable. I can't hand that to you as another opportunity to go do, because most high-impact things don't look that way. Look at raising money for a startup. Maybe one of the highest-impact things you could have done was to invest money in YouTube, because YouTube has created this massive amount of impact in terms of people's ability to learn new skills or whatever. Or to donate money to Wikipedia early on. But that's not replicable. Once it's done, it's done. You need to figure out the next thing, and the next thing is this big unknown. What people want is a place to put money so they can buy indulgences, so they can feel less guilty. And unfortunately, that's not a thing that exists. No one is printing indulgences where you can just give money and have done your part. That's just not how it works. Do your best. Do enough. That's good. I love the Giving What We Can pledge. I think that's a hugely beneficial idea: "Hey, what if we all just gave 10% and we said that was enough?" That would actually be way more than people give today. And it would also be enough, I think, if we all did it. Then people could stop beating themselves up and feeling guilty about not doing enough, which is acting from a fear of "I am not good enough." That's one of the most dangerous things you can do.
SPENCER: Last two questions for you. Why is the city of San Francisco such a shit show?
EMMETT: Because we are a variance-embracing city. It runs through us, top to bottom: we love the weird, we accept the weird, we are hesitant to impose on people's autonomy. And therefore we will have some of the most amazing things in the world start here and be made here, and some of the most amazing people in the world do amazing things here. And we will also have really bad outcomes, and people will do bad stuff, and bad things will happen. I think it's a mistake to think we will ever manage to get one without the other. There are also some very specific things: because of Prop 13 and the particular politics of San Francisco, we've ended up with a leftist, reactionary homeowner alliance that has prevented housing from being built in the city, and that is at the root of a vast number of seemingly separate problems. So I think there are a few places where we have some own goals, where we could actually just do straight-up better. There's no weirdness trade-off there; we should just let people build stuff. But even if we did that, it would still be a shit show in a bunch of ways, and that's probably not going away.
SPENCER: Alright, last question. How do you think society should handle potential future dangers from artificial intelligence?
EMMETT: I think the most important thing to do with AI danger is to distinguish two separate dangers. One is: there's a new technology being built that lets you make tools that are more powerful than prior tools for accomplishing interesting things. There, we should do what we generally do with technology regulation, which is wait for actual harms to arise and then act to prevent those harms as necessary. I realize that's a very general answer, but the key part is: wait until you see what the actual harm is, because guessing what the harms will be is really hard, and you won't get it right. So it's much more about waiting for the actual harm and being responsive, and hopefully not reactive, to it. Then there's the other kind of harm from AI, which is what happens when you make an AI that's smart enough to build another AI, using its own source code or whatever, that is slightly smarter than it is, and that is capable of building yet another, smarter AI. What happens when it becomes a self-sustaining AI that can build a better AI? It's sort of like a critical nuclear reaction: there's no clear end point to that process. And if that self-improving thing is aimed in the wrong direction, it can be very dangerous, dangerous at the scale of end-of-the-world dangerous. Therefore, we need to be paying attention to everyone who is building an AI and checking: are you getting close to that particular point? I don't think we even need to know in advance what we would do about it if they got there, and I don't think anyone is actually right on the cusp today. But we're close enough that, if you're going to work on that, you should be registering it somewhere, and we should be keeping track of your progress. There should be some way of verifying and testing, as you build each thing, how close you are to recursive self-improvement. And as we get closer to that point (which I think we're quite far from, to be honest), we should be aware, and we should figure out what to do then, which is probably something along the lines of slowing down or stopping unless we have a really, really good reason to believe we understand the dynamics of what will happen after we cross the threshold. That's the other big thing we should be doing: a sort of sensor network. We should be paying attention, knowing, and noticing on the self-improving side. And then we should just be waiting for actual harms to arise from this technology, as with every other technology we have, and reacting as necessary.
SPENCER: Emmett, thanks so much for coming on. It's been a great conversation.
EMMETT: Yep, thanks for having me.
[outro]
JOSH: A listener asks: "What do you do with the gathered data from Clearer Thinking?"
SPENCER: We have so much data, it's kind of remarkable; but we don't always have the time to do anything interesting with it. But what we do from time to time is we'll analyze it to try to (1) improve our tools; so it's really, really useful having some of this data. And (2) we'll use it to try to gain insights that we didn't know. And then sometimes these insights will become the content for new newsletters or blog posts. So we basically try to use the data to learn. That's it. We don't sell the data or anything like that.