CLEARER THINKING

with Spencer Greenberg
the podcast about ideas that matter

Episode 087: Are you a wamb or a nerd? (with Tom Chivers)


January 13, 2022

What is a "wamb"? What are the differences between wambs and nerds? When is it appropriate (or not) to decouple concepts from their context? What are some common characteristics of miscommunications between journalists and writers / thinkers in the EA and Rationalist communities? What are "crony" beliefs? How can you approach discussions of controversial topics without immediately getting labelled as being on one team or another? What sorts of quirks do members of the EA and Rationalist communities typically exhibit in social contexts?

Tom is a freelance science writer and the science editor at UnHerd.com. He has twice been awarded a Royal Statistical Society "statistical excellence in journalism" prize, in 2018 and 2020, and was declared the science writer of the year by the Association of British Science Writers in 2021. His first book, The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future (originally titled The AI Does Not Hate You), was declared one of The Times's science books of 2019. He worked for seven years at the Telegraph and three years at BuzzFeed before going freelance in 2018, and was once described by Sir Terry Pratchett as "far too nice to be a journalist". Find out more about Tom on Twitter, UnHerd, and tomchivers.com.

JOSH: Hello, and welcome to Clearer Thinking with Spencer Greenberg, the podcast about ideas that matter. I'm Josh Castle, the producer of the podcast, and I'm so glad you've joined us today. In this episode, Spencer speaks with Tom Chivers about the nerd/wamb distinction, crony beliefs, and perspectives on the rationalist community.

SPENCER: Tom, welcome. It's great to have you on.

TOM: Thank you very much, good to be here.

SPENCER: So sometimes with this podcast, I try to make the whole episode have a theme. But today's gonna be a little different: we're really going to do a grab bag of interesting, fun ideas, and we probably won't be able to tie them all together with a nice thread, but hopefully it'll still be engaging. And I hope also to kind of dig into some of these topics, so even if people have heard a bit about them before, hopefully it will go deeper than the normal conversation about them.

TOM: Okay, sounds great.

SPENCER: Awesome. So the first topic, do you want to tell us what a wamb is? Am I even pronouncing that right?

TOM: I think so. Okay, so it's a made-up word, right, so who knows – and it's made up by a guy who's actually Swedish, so God knows how he decided to pronounce it. But it's spelt W-A-M-B. It comes from a guy called John Nerst, who I'm sure a lot of your listeners will be familiar with if they come from the sort of rationalist, AI community. He's the author of a blog called Everything Studies, and he wrote this blog post a while ago called "The Nerd as the Norm." I really envy him, because he's so good at coming up with concepts that make sense of little ideas in the world – he puts names on things I hadn't realized needed a name, that did need a sort of concept to explain them for me. What he pointed out was that we have this word "nerd," which describes a subset of people with a certain set of characteristics, psychological traits, and he lists them: an interest in things and ideas over people, a concern for correctness over social harmony, obliviousness to or disregard for social norms and expectations – nerdiness, right? I am probably a nerd, I suspect you are a bit of a nerd, and I expect a lot of listeners identify as nerds. I guess it's sort of the systematizing-versus-empathizing brain. But he points out that often, when people write and talk about nerds, there's this idea that there are normal people, and then there are nerds – and there's this spectrum on the nerd side, which shades off in the popular imagination into autism or Asperger's Syndrome, but on the other side there's just "normal people." And he says, well, obviously that's not how it is: there's a bell curve, and people on the left-hand side of the bell curve will have those traits to a lesser degree. If you turn it around, those people would instead have an interest in people over things and ideas, a concern for social harmony over correctness, and stronger, more emotional expression. This is the sort of anti-nerd, and he felt there needed to be a word for it. And he said the word wamb – which I choose to pronounce to rhyme with "lamb" – has this sort of gooey, shapeless feel, whereas "nerd" sounds nerdy, sort of prickly and intense. So I quite liked it, and I thought it filled a conceptual gap that I felt was always there, and it has therefore become a really useful concept for me in a lot of ways.

SPENCER: Yeah, I love that idea. First of all, one thing I think is really nice about it is that it takes this dichotomy of nerd versus normal and says, "Oh no, let's actually turn this into a spectrum, and then let's consider how people vary along the spectrum." And I think that really adds a lot of insight to the concept. But another thing about it I think is interesting is that it goes from defining nerds as the weird group and everyone else as the normal group to saying, "Oh no, there's just natural variation along the spectrum." And there are probably advantages and disadvantages to being on either side, rather than defining it in terms of bad versus good, essentially.

TOM: Yes, exactly. And, you know, the problem I have as a nerd is that I find myself wanting to say "But that doesn't make sense, that's not true" an awful lot when there are facts we're all agreeing or disagreeing about on the internet or something like that. There was an example I saw years ago in Britain: there was this big uproar in which someone said the Tory government had killed people – that tens of thousands had died within six weeks of being declared fit to work by the government's disability assessment process. So they'd said, "Well, we think this person is fit to work," and tens of thousands of people had died within six weeks of that date. And this got people so angry and so upset, and I immediately thought, well, this doesn't feel right to me – that's an absolutely huge number, I can't believe it's as straightforward as that. So I looked into it, and it turned out the figure was deaths within six weeks either side: a lot of these people had died and then been recorded as no longer needing their disability benefit because they were dead. So a lot of it was actually the system working quite well, right? I pointed this out because I felt it was important, but people were saying, "Look, we're quibbling over numbers – does it matter? Even one death is too many." And I sort of want to say, no – there is the emotional response, and then there is the separate question of what the correct numbers are. I feel like there are times when the right attitude is to feel the emotional response and get involved, but there's also a really important role for nerdily assessing the numbers and having this attitude of correctness over social harmony, correctness over being part of the group. Now there's this word, wamb, which allows me to say – generally behind people's backs, I have to admit – "He's just being a wamb about it," like you might say he's being a bit of a nerd. It lets you express that that attitude can have its downsides, just as the overly nerdy attitude does: "I'm going to worry too much about whether it's 1.8 or 1.9." So it was really useful for me to have this concept fleshed out – it put a label on something I felt needed to be called out of the ether. Another idea of Nerst's – and this is what I mean about him being really brilliant at coming up with useful concepts – was this dichotomy he drew between people who are good at decoupling concepts and people who aren't. And decoupling – I'm sure you've come across this?

SPENCER: Yeah, I think it's a really important idea. Did he originate that?

TOM: It's an existing psychological idea – he lifted it from psychological theory, though the theory used it somewhat differently from the way we now use it. The example that came up when I wrote about this concept for UnHerd was Richard Dawkins talking about eugenics. Richard Dawkins said, you know, "It's one thing to say that eugenics is morally unacceptable. It's another thing to say that eugenics wouldn't work." For someone who is good at decoupling concepts from each other, it's easy to separate the idea that something might work from the idea that it is a good idea to do it. For people who are less willing or less able to decouple, the whole context comes with it: when they hear someone say that eugenics works, they hear the whole history of eugenics – the Nazi experiments, the awful, racist, unpleasant history of it. Those ideas arrive associated together, and they can't extricate, or don't want to extricate, the claim from its history. And I feel that's very connected to this nerd-wamb distinction. There's something nerdy about saying, "Well, when I say X, I don't mean Y." For nerdy people, decoupling counts as a sort of magic ritual: okay, fine, here's the idea of whether eugenics works, isolated on its own, separated from the distinct question of whether it is morally okay. As someone who is nerdy and given to this decoupling, I sympathize an awful lot with people like Richard Dawkins who do that sort of thing easily. But on the other hand, I don't want to say the alternative way of looking at the world is wrong. I think there is great value in being able to remember the context of things and to be socially aware – we are social animals, and there's great value in being able to understand the society around us. The example I gave in one of my pieces: my children are young, they haven't worked out social mores yet, and they will sometimes say things like, "Look, Daddy, isn't that man enormously fat?" And you think, "Well, that is true, and as a nerd I respect your interest in telling the truth over social harmony, but at this particular point I would like you to lean a bit more towards social harmony by not saying that, please." So there is huge value in the wamby, low-decoupling side of things. But until these concepts were pointed out, I didn't really have a way of thinking about them, so I found it really useful to have this dichotomy, this separation of the two ideas, pointed out for me.

SPENCER: I think it's a really useful distinction. I'll give an example that maybe will illustrate it. Suppose that someone's doing research on really hardworking leaders, right? A very nerdy person includes Hitler on their list, right? They might say, "Well, yeah, Hitler, he seemed like he was hardworking." I actually don't know if he was hardworking, but let's suppose.

TOM: Let's imagine he was, right.

SPENCER: And then a wamby person would be like, "What the hell, you can't put Hitler on your list of hardworking people. Don't you realize all the social and political context around this and how offensive that is, and so on?" And a nerd would be like, "Well, all that matters is whether he was hardworking. I'm not saying he was good, I'm just saying he was hardworking." Right? So they're kind of decoupling the historical context and ethics and so on, and just considering the question narrowly. And I think what tends to happen is that nerds tend to be better at analyzing an existing structure and saying, "Okay, what are the pros and cons of this structure?" Whereas more wamby people tend to be better at saying, "Well, you're kind of missing the bigger picture here – the right and wrong, what matters here. Getting into the technical details of whether Hitler was hardworking or not is sort of missing the point."

TOM: Yeah, the Hitler example is a really good one, because I remember – I think it was Philip Tetlock – citing the Wehrmacht, the Nazi army, as an example of an organization that was well run and efficient at achieving its goals, or something like that. And he said people who can think in those terms can say, "Look, yes, obviously the Wehrmacht was bad, but we can learn things from it as an effective fighting machine" – which it obviously was, for the first few years of the war. They're able to separate the valuable lessons you can take from it from the wider context of it being dreadful. People like superforecasters – a concept I don't imagine I need to explain on this podcast – are traditionally very good at separating the question of whether something is to be admired from whether it is something from which you can learn lessons. And I think this all comes down to the same fault line – the nerd-wamb, high-decoupler/low-decoupler idea: people who are happy taking things out of their historical and social context, and people for whom a whole great wash of emotional attachments comes with them, who are very uncomfortable saying, for instance, "The Wehrmacht was very good at maintaining its supply lines," or whatever specific positives there were. So yes, I absolutely agree the Hitler example you gave is exactly right. It is about your ability to separate out specific subsets of things – to break things down into parts and say, "Look at this part, ignore the rest of it." And I think it's a really useful skill. I suppose the ideal person would be someone who can nerdily do that and wambly be aware of the wider context, but I don't know how common that sort of ideal person actually is.

SPENCER: Right – being able to put on one of the frames temporarily, take it off, put on the other frame, and flip between them as useful. You know, I feel like a really good example of this is Peter Singer. I don't know how familiar you are with him.

TOM: Big fan. Yeah.

SPENCER: Yeah. So he'll do things – I don't know if this is something he says exactly, but he'll say things like this – he'll analyze, say, a baby just before it's born and just after it's born. And he'll talk about how people view it as ethically very different just before it's born versus just after, but that this is actually really hard to justify: if you think it's okay to kill it just before it's born, why can't you kill it just after? And people will get really upset at this kind of reasoning. But he's just making a super technical, philosophical argument, pointing out that it's hard to draw these distinctions. Or he'll say, why are you okay with eating animals, but not okay with killing a person who is so mentally impaired that their brain capacity is actually less than, let's say, a dog's? And people, again, will get really, really upset about this. And, you know, I feel like the way he analyzes it is on the extreme nerd side of this, and the typical reaction is coming from the wamb side. What do you think about that?

TOM: I think it's a brilliant example, actually, and it's one I have repeatedly thought about. I quite often see people describing Peter Singer as a guy who advocates the killing of disabled children, or something like that.

SPENCER: It's almost the opposite of what's true. He's saying we shouldn't kill animals.

TOM: Yes, exactly. He's doing a thing which I really admire in a philosopher: following a chain of thought all the way through. He's saying, "Do we assign moral value because of how conscious or intelligent a creature is?" If we do, then why do we assign more moral value to a severely mentally disabled person – who, I think it'd be fair to say, is essentially brain dead in many cases – over, for example, a pig, which as far as we can tell is extremely intelligent and highly conscious, and yet we have very few qualms about eating bacon? Funnily enough, I got into a sort of row about this: there was an objectively kind of hilarious series of tweets from a woman on the internet not long ago about her fantasies about having sex with horses. And I wrote a somewhat, I hope, funny piece saying, actually, I don't know how you justify calling that immoral when essentially breeding and torturing and murdering horses for meat is not immoral. I don't know how you can construct a stable system of ethics of any sort which says, yes, it's fine to torture and kill them for food, but it's not alright to have sex with them if you wish to. It's a bit of a strange concept – you do you and all that – but it is really hard to justify the distinction in any sort of ethical way. The point is, most people just go, "That is weird and wrong. I associate that with badness, and I associate eating bacon with goodness, and I don't have to follow the chain of reasoning any further." Whereas what people like Peter Singer – and, I think less successfully, sometimes Richard Dawkins – do is follow the chain of reasoning and, to some degree, accept where it goes. Rather than veering off when it gets into uncomfortable, "now we're in a weird situation" territory, they, to my mind, bravely follow it all the way. And sometimes it gets them in all sorts of trouble, because people will take a single line out of one of their books or one of their tweets and put it all over the internet, and that won't look good for them.

SPENCER: So the extreme interpretation of what you just said is that you're in favor of having sex with horses. Is that right?

TOM: Oh, yeah, completely. Absolutely. I mean, only at Christmas and Easter, you know. No – just for clarity, that is not what I'm saying. But I do think it is very hard to construct a coherent ethical philosophy in which it is all right to murder and torture animals for food and not all right to have sex with them. But then, yes, it gets a little strange thinking about it, so perhaps I should quietly move on.

SPENCER: But you know, maybe weirdness is another nerd-versus-wamb distinction. Like, nerds are much more okay with doing things that are generally considered really weird. Not weird in a "this is a totally socially acceptable form of quirkiness" way, or "I'm weird, like I'm goth" – a standard, accepted form of weirdness that has its own subculture – but, like, you just have some weird belief that very few other people have, right? Whereas wambs just kind of look down on that.

TOM: Yeah, exactly, I think that's true. And again, with the rationalist community, I find it extremely interesting that the idea of AI taking over the world – which is not central, but is commonly worried about – gets dismissed as weird. A fair bit of it is the AI alignment problem, which I'm sure, again, I don't need to explain to your listeners –

SPENCER: We've had a couple of episodes on that.

TOM: Yeah, yeah. It's such a classic nerdy thing, because if you take the core ideas – that computers go wrong in weird ways, basically by following your instructions extremely literally when those instructions turn out not to mean what you thought they meant; that AI is getting more powerful and will be in charge of loads of important things; and that we'll probably see artificial general intelligence in my lifetime, or if not, my children's – those concepts are all, individually, not very hard to grasp or particularly easy to argue with. But the idea that they will then lead to some sort of massive societal disaster feels like science fiction. So people go, "That's weird. Not going to think about that," and instead they worry about more near-term or easier-to-conceptualize problems – which are often very real problems, like climate change or racism. They just go, "Whoa, this feels a bit weird – isn't that the plot of The Terminator? Bit weird, nerd," and they steer away from it. But actually – and this is a thing I really admire about the rationalist community – they follow that chain of reasoning all the way through. They say, actually, if AGI is relatively near-term, and computers can go wrong in these dramatic ways, and AGI will be a computer and will try to fulfill its utility function, then there are obvious ways in which it can go wrong, and we should probably have some smart people thinking about that. You need people like that in the world. And they'll be wrong a lot of the time – maybe it'll turn out that, for some unforeseen reason, AGI won't be a problem and it'll be fine, or maybe it just needed a bit of fixing because some clever people thought about it. But if you don't have people who are quite weird, who are willing to follow chains of reasoning all the way through, and who are concerned with things being correct rather than worrying about whether people will think they're weird for thinking them, then you end up missing some big problems, I think.

SPENCER: Yeah, it's really an interesting point. I would add something I just thought of now that's kind of surprising to me. Not only do nerds tend to take things like superintelligence much more seriously than wambs, but there's almost some sense in which nerds are modeling superintelligence as the extreme nerd, whereas a wamb might be modeling a superintelligence as more like a wamb. You know, oftentimes the gut reaction of people who are not deep in this, when you talk about superintelligence, is: "Well, if it's so smart, it will just know what to do. It will just not do bad things." It will kind of have intuition, or something like that – it will know it shouldn't destroy the world just to make tea. Whereas nerds are like, "No, no – if you tell it to make tea, how does it know not to destroy the world? You have to specify that precisely." It's almost like they're thinking about superintelligence as the most extreme nerd, with absolutely no ability to infer anything beyond the literal meaning of exactly what it's been told.

TOM: Yeah, exactly that. And it may well be that it would be perfectly capable of knowing that's not what we wanted it to do – but we haven't told it to care about that. It's this idea of being able to separate out the idea of intelligence from, I guess, our wider idea of human intelligence, or wisdom, or morality. The ability to solve problems is a really narrow form of intelligence, and again, it's separating that concept out from wider concepts. And again, I'm a nerd – I should declare my interest right there, I'm talking about my people – but I find the nerdy way of thinking much more accessible sometimes, and I also find it really useful for society to have people doing this. So pointing out this distinction between nerdiness and wambiness – and I'm not saying it's just nerds versus the rest of them, I suppose I should say; there's a spectrum which ends with some people being pathologically nerdy at one end, most people being, you know, not "normal" exactly, but within the 95% confidence interval at the center of the bell curve, and some people being pathologically wamby at the other end – gives me a much clearer picture of what the distribution of human traits is actually like.

[promo]

SPENCER: You mentioned the rationalist and effective altruist communities. And we have quite a lot of listeners from both of those communities. So I'm actually really interested to know how you got so interested in them that you decided to write a book about them. Do you want to tell us about that?

TOM: Yeah, sure, sure. I mean, it goes back a long way now, because I'm so old and everything in my youth was so long ago. It was about 2014 or something that Nick Bostrom's book Superintelligence came out, and I had never previously considered AI risk and all these things. But I read the book for review for The Telegraph, which is where I worked at the time. And, I think unusually among reviewers of the book, I understood what Nick Bostrom was getting at – this idea of an alignment problem, that an AI would do what you told it, very precisely, not what you wanted it to do, and that these two things are very different. I think there were a lot of people who reviewed that book without understanding that point and just started making jokes about Skynet – a lot of the reviews were illustrated with pictures of the Terminator. In fact, I have a horrible feeling my own review was illustrated with the Terminator too, but that wasn't my fault. And after I wrote it, Paul Crowley, who works at Google, got in touch, just emailed, and said, "You got it. That was not dreadful."

SPENCER: High praise.

TOM: Yes, exactly – and that literally was high praise, right? It's a complicated thing to get right.

SPENCER: And you were probably one of the few, over the years, that did. So that's impressive.

TOM: Yeah, well, exactly. And I was a journalist with a philosophy degree – probably people's hopes for me were not super high, right? So after that, Paul suggested I read a few things. I'd probably first stumbled across Eliezer's "Zombies! Zombies?" article, the one about David Chalmers on consciousness. And I found that fascinating – so robustly common-sense, just "this implies this." I found it really solid and robust, and I really enjoyed it. So I read some of that, and then found myself reading Slate Star Codex. And eventually, a couple of years after that, in the sort of 2015 period, I read the Sequences. Again, I just liked that this was taking not-that-controversial concepts and following the reasoning through – really interesting ideas about human rationality. I really liked the pulling-a-thread-on-a-jumper quality: you pull a thread on your jumper just to tweak it out, and you end up pulling the entire jumper apart. I felt like the Sequences had this marvelous thing like that: "I need to explain why artificial intelligence is a problem. But to do that, I have to explain what intelligence is, and why it's different from human intelligence. And then do I have to explain the entirety of everything?"

SPENCER: I don't know how quantum mechanics gets in there, but somehow it does. Yeah.

TOM: I know, exactly. It just has this marvelous spiraling out of control: "No, I'm going to have to build an entire worldview from the ground up." I found that really endearing. And then around 2016, when I was really at peak obsession – reading the entire back catalog of Slate Star Codex, having finished the Sequences a few months before, just getting lost in the whole blogosphere – AlphaGo defeated Lee Sedol in that fascinating match. And I wrote a piece about it for BuzzFeed, which is where I was then working. I remember Eliezer had written a really interesting thing on Facebook saying that Go is orders of magnitude more interesting than chess from the point of view of AI – from a sort of generality point of view – because it requires much more intuition: it's not something you can learn by rote, it's not something you can do brute-force calculations on, and it's a really interesting indicator of AI progress. So I wrote a piece about AlphaGo and why it was so fascinating, and included a decent chunk of this idea that it's an indicator of AI taking real strides towards generality. I don't know whether Eliezer still agrees with that position, but certainly my own watching of DeepMind does make me think they're the people who've come closest and are doing the most interesting things. After that, it becomes a very boring story about publishing, in which a literary agent who had liked my writing for a while, and had taken me out for various nice lunches in central London, kept saying, "You should write a book." And after I wrote that AlphaGo piece, he said, "You should probably write about these rationalists, they sound fascinating." And I was like, "Well, yeah, actually, I've downloaded quite a lot of information about them into my brain, so I can do a lot of that easily." So we put together a proposal and pitched it, and then I flew out to California and sat in a grim Airbnb above a nightclub in Berkeley and tried to get people to let me interview them, and a few people were kind enough to do that. And I think it worked out all right. I don't know if you've read the book – there's absolutely no reason why you would have, especially since it's not even available in America yet – but the people who have read it from the rationalist community, I think, broadly feel it was a fair portrayal of them. There's one story – I don't know if you know this. Before the book was published, when I was doing the interviews and things, I popped up on the Slate Star Codex subreddit under my real name, rather than the top-secret nerdy identity I use for posting my Warhammer models. I said, "I'm Tom Chivers, I'm a science writer, I'm doing this book about the rationalist community, ask me anything." People asked me about it, and people suggested I come along to meet them at various meetups, but one guy said, "Look, this will be a hatchet job. Don't get involved, rationalists. This will be the mainstream media making you out to look weird." And one of the most touching and flattering things was that Scott Alexander popped up on the subreddit and said, "I don't think it will be. I've spoken to the author and he seems like a reasonable guy. And I'll bet you $1,000 to a charity of your choice should it turn out not to be, in the eyes of some independent judge."

SPENCER: Yeah, that's such a rationalist story. I love it.

TOM: It was great, wasn't it? Yeah, exactly. I feel like you could learn a lot about the rationalist community from how that all went down, actually. And I seem to remember that Scott's charity of choice got the money in the end. I hope it was MIRI or something – that'd be suitable. If it was a really rationalist charity, I would approve of that.

SPENCER: I think Scott won that one.

TOM: Yes, he did. Yes, sorry – Scott won the bet. And I believe the guy did say on the subreddit a couple of years later, "Yes, fair enough, I will pay." And I'll just say, if it went to some animal charity like the Donkey Sanctuary or something, then fine, but I do hope it was MIRI, or GiveWell, or one of the proper EA/rationalist things, so that it gets maximum rationalist points.

SPENCER: I have a little mini-theory about why rationalists are still worried about hatchet jobs, which is that I think what happens is: journalists will ask to talk to them, and then rationalists will try to explain their ideas, and then journalists will write stories about whatever the weirdest social customs in the community are and almost completely ignore the ideas, right? The ideas maybe get mentioned in passing, but it's really just about how weird these people are.

TOM: Yeah. I mean, it's quite hard – I had to include some stuff in the book about polyamory and so on, because it is a real thing about the community, which is hard to leave out.

SPENCER: Well, yeah, just to clarify: it's much more common in the community than it is in most parts of the world. However, it's still only a significant minority of people in the community, right?

TOM: Yeah, and this is the other thing: how do you define a rationalist? Is it the real central circle of people who live in Berkeley group houses and, you know, exist entirely socially and economically inside the rationalist community? Or does it include people who've read Slate Star Codex a few times? There are lots of concentric circles, and the "weirdness" of the behaviors, if you want to call them weird, varies across them – the large majority of people who have read Slate Star Codex would probably not self-identify as rationalists. But also – to bring this back to the conversation we were having a minute ago about wambs and nerds – I think it's because there's an understandable paranoia amongst a group of people who are really nerdy and not very good at judging what society will say about the things they think and believe. They'll do the thing from the example we were talking about: they'll say, "The Wehrmacht was a really proficient fighting force, and we should learn lessons from it," and then they get in loads of trouble, because people who aren't nerdy, who have the more wamby, lower-decoupling attitude, will say, "Well, this guy has praised the Wehrmacht." So there's a real tendency among the rationalist community and other nerdy groups to stumble into big rows and get loads of abuse online, or get hatchet jobs from wamby journalists who don't realize the sort of person they're dealing with. There's a sort of translation problem: journalism is a very wamby career choice, and the rationalists are a very nerdy group of people, and quite often at the intersection you get these gigantic failures of translation, where someone will say something like, "We can learn these lessons from the Wehrmacht as a fighting force," or "Hitler was very hardworking" – to use slightly ridiculous examples, to keep it away from real topics. And that will cause a massive row, because the other side will hear "I think the Wehrmacht was good," which is not what was intended. The end result is that the rationalists end up very paranoid, because from their point of view it's arbitrary: sometimes they'll say something that seems totally normal within their usual set of conversations, and that one will be picked up and thrown around, and they'll get destroyed for it, whereas other times they've said things that seemed to them exactly the same sort of thing, and no thunderbolt came out of the clear blue sky. And when you get these unpredictable attacks that seem to happen for no real reason, you develop a sort of generic defense system of never saying anything to journalists, who will just every so often randomly destroy you. I can really understand that paranoia – I think it's actually quite a rational paranoia when you're not capable of judging which things are going to explode in your face. That would be my thinking, anyway.

SPENCER: Yeah, that's a really good point. I'd also add that there's another aspect of this nerd-versus-wamb idea that we didn't touch on, which is whether you assume people are on a team or not. I think it's a very wamb thing to assume that, okay, if you're saying something positive about Hitler, you must be a neo-Nazi, right? And this also goes back to the coupling-decoupling thing. It's a wamby thing to try to predict what team someone's on, whereas it's a nerd thing to say, "Well, I could have nuanced positions on 52 different topics; it doesn't mean I'm on any particular team. I can just see the pros and cons – let's analyze the good and bad things about each different philosophy," or something like that. And I think that comes up as well, because you'll see journalists accusing rationalists of having allegiance to some random political group, which is just clearly wrong – they're totally misinterpreting what's happening. Really, it's just an extremely nerdy way of analyzing the pros and cons of different stuff.

TOM: Yeah – just to do the horse thing back to you: so what are the pros and cons of Nazism, Spencer? No, I take your point precisely. There is a thing of wanting to divide the world into ingroups and outgroups – "I'm with this group, and I will take my beliefs from the social group to some degree, and someone declaring any sort of sympathy for, or acknowledgment of, or admiration for the strengths of the outgroup's beliefs is obviously a member of the outgroup." Beliefs get declared to indicate group status rather than as statements about their truth or otherwise. Which actually would be a good segue, if I may, into the next thing: crony beliefs. That's a concept from Kevin Simler, who co-authored The Elephant in the Brain with Robin Hanson – again, this is all stuff I'm sure most of your listeners will know perfectly well. Naively, you'd think the beliefs in your brain are there to help you navigate the world, right? So I might ask, "Do I believe that climate change is a real threat?" and whether I say yes or no is a statement about whether I believe climate change is a real threat to human flourishing. But what Simler argues – and I think it's a really fair point – is that beliefs are actually doing at least two main jobs, and one of them is signaling group status, just like we were talking about. In some cases – climate change is a good example – the social-group signaling is in a way a lot more important for the person holding the belief than its truth value. Because if I, as one person, believe that climate change isn't real, the impact on the outcome of climate change will be pretty minimal. I can take fewer flights, eat less meat, drive less, vote for Green parties, or whatever, but it won't make a great deal of difference to whether warming is 1.5 degrees or two degrees by 2100 – the actual impact of my decision will be pretty minimal. But if I am a Republican in the southern United States and I declare that I believe climate change is real, that will have a quite profound impact on my social life, on my ability to interact with my friends and colleagues. Likewise if I say the opposite in other company – one way or another, it will have a big impact. Now, Simler has this really nice metaphor, which I liked: think of beliefs as like employees of a business in a town that's really nepotistic, where you always have to keep the local politics onside. You'll employ most people because they're good at their jobs, but sometimes you also just have to hire a guy because he's the mayor's nephew. If someone from outside comes in and looks around, they'll see seven of your employees working really, really hard, but one guy sitting at his computer checking football scores and picking his nose, and think, "Why is that guy there? Surely we should fire that guy." But he's not useless: he is doing a job, which is providing political cover – he's allowing the company to continue to work without being punished by the nepotistic political system. And that's the way to think about beliefs. Some of your beliefs are there working hard to help you navigate the world: do I believe it will rain later? Should I bring an umbrella? That's one thing – my clothes will get wet if I'm wrong about rain. But, to stick with the climate change example, there's much less personally at stake there: if I'm wrong about the truth value of the climate change claim, it will make very little difference to the outcome. So it's there as a crony belief, as Simler describes it. He points out that this doesn't mean it's wrong. I genuinely do believe – and have done my best to investigate the claim – that there is a significant risk to human flourishing if climate change is completely left unchecked and spirals out of control. I'm not someone who thinks it's an existential risk, but I think it has the potential to make human life significantly worse in large parts of the world. But then also, right, I live in North London, and I'm surrounded by the liberal elite of Britain, basically, and it would be extremely socially awkward for me to believe anything else. So a belief can be both true and crony. Again, I found that a really useful little concept – a way of looking at how human brains work, and how we think as humans, which I hadn't fully articulated before.

SPENCER: Yeah, I agree, I think it's an extremely useful concept. And stepping back and taking an outside view: if you have a crony belief where there's another side – a large group that disagrees or believes the opposite – then a priori, in some sense, it's equally likely that you're right or wrong. It doesn't mean you're necessarily wrong just because it's a crony belief. But if it's a polarized issue with two major perspectives, and half the people hold the other perspective – well, it could be that both groups are wrong, but a priori there's at least a 50% chance that you're wrong. So I do think we can probably say of many crony beliefs that they have a pretty high chance of being wrong.

TOM: Yes, I think that's absolutely true. Like you say, it must be true for any crony belief which signals membership of a group – I suppose you wouldn't be signaling group membership if there wasn't a group you were signaling non-membership of. Climate change pretty obviously maps onto the left-right political divide: you can use someone's beliefs about climate change to predict their other beliefs – about the justice system, about race relations, all these different things. And yes, you're right, it must be the case that, on average, at least 50% of people's crony beliefs are wrong. So, taking the outside view, if you have good evidence that something is a crony belief of your own, your starting point must be to think there's a very high chance you're wrong about it. I find it very hard to identify my own crony beliefs – they feel very obvious in other people.

SPENCER: Is that always the way?

TOM: Yeah, exactly. I really struggled. I mean, I think the test Simler suggests is this: if I were to say, "I think the England game kicks off at five o'clock," and someone said, "No, at six o'clock," I wouldn't get cross about it. I wouldn't go, "Goddammit, it kicks off at five, and I will go to the barricades to defend this." I'd just say, "All right, thanks," because I need that information to navigate the world – to get home in time to watch it, whatever. But a thing I've repeatedly argued is that the world, in the Max Roser, Hans Rosling sense, is broadly getting better: life expectancies are improving, child mortality is improving, all that sort of stuff. And I find that when people argue with me about that – when I read, I don't know, Jason Hickel or someone saying, "Actually, poverty has got worse in some respects, if you twist the numbers like this" – I find myself getting annoyed, like, "Well, I will defend this." So I'm fairly confident there's an element of cronyism in that belief of mine: I want to be in the sort of Hans Rosling, Max Roser, data-led group of people who can point to these big societal trends that are broadly getting better while also being very clear there are big problems in the world. I think that's probably the closest I can come to a crony belief I can actually put my finger on. Whereas with climate change, I don't know if that's a crony belief for me, because I don't get angry when someone says climate change isn't real. I just say, "Well, I think you're probably wrong, but I'm quite uncertain about it all." I don't know – does anyone have a clear sense of what their own crony beliefs are, do you think?

SPENCER: Yeah, it's a good question. Well, I like that you're pointing to emotion as one indicator – obviously there can be false positives and false negatives, but it's an interesting indicator. I'll just point out two other indicators that something might be a crony belief. One: your tribe believes it, and another tribe believes the opposite. That seems like an obvious one. Another would be the thought experiment: suppose I found really good evidence that this is false – would I be embarrassed to talk about that with my social group? Would I be embarrassed to post it on social media where my friends would see it? And I think if you have those things in place – your tribe believes it, the opposite tribe believes the opposite, you'd be embarrassed to tell your tribe if you stopped believing it, and maybe you feel some emotion when people contradict it – together they seem to work pretty well. I think something about myself is that I can go into this mode very easily where I temporarily adopt another person's worldview, even if it's extremely different from mine.

TOM: Can you? That's really good.

SPENCER: Yeah. And I also have almost no anger – it's a kind of personality quirk of mine. I almost never feel angry about anything. I still feel annoyed or, you know, frustrated, but I almost never feel angry. People always talk about how Twitter enrages them or Facebook enrages them, and I have no ability to identify with this, because it never does that to me. No matter how crazy or terrible or harmful people's beliefs are, I immediately go into this mode of, "Huh, I wonder why this person would believe this thing. It seems really surprising and weird to me that someone could come to believe this crazy thing," not, "Oh my gosh, I can't believe they believe this, I'm so pissed at them," or "They're so harmful." So I'm sure I have crony beliefs, but maybe I take more of an almost sociological viewpoint towards other people's beliefs.

TOM: There's a marvelous thing in Simler's post where he says there is actually a community that gives social reward for being right, for showing your working, for gracefully accepting when you're wrong, and for taking criticism without getting angry about it – and obviously, that's the rationalist community. It forms a community of people for whom the crony belief is "we should try to be as right as possible, and we should gracefully acknowledge when we are wrong," and that somewhat sidesteps the problems of crony beliefs, because you get your social reward not from believing the right things but from going through the right steps. I don't know how true it actually is that this avoids the problem, but I like the idea: if the problem is a social reward for wrong beliefs, then you hack that – you patch it – by providing a social reward for forming your beliefs in the right way. And I really like that. I hope it's true – this, I guess, is maybe a crony belief of mine, I don't know, but I hope it's true. It makes intuitive sense to me that the social reward is what's important there, so let's redirect the social reward in such a way that we reward the things we actually want.

SPENCER: It seems like a very valuable norm to say, "Oh, you can get social points by critiquing another person's viewpoint, or finding flaws in it, or making better arguments." And that's usually valuable. On the other hand, there is some downside to that, where you can get people disagreeing just to earn points, or making people feel dumb by showing that their argument is flawed. And I do think that's a real danger. I think sometimes people feel intimidated about expressing their viewpoints – let's say on LessWrong, or the EA Forum, or things like that – where they feel like people are going to tear them to shreds.

TOM: Yeah, probably. But I think the idea is rewarding genuine humility, not fake humility – rewarding "actually, I have changed my mind on this" rather than mocking it, or rewarding "look, I will show my working: I've got these beliefs, and I can support them in these ways." And looking for the crux of disagreements – in the book, I did a marvelous thing with Anna Salamon called the Internal Double Crux, which is about finding, when you're unsure about something, where it is that your beliefs hinge. The idea is that when you disagree with someone, there'll be some crux of the disagreement, some single point. The example they gave was school uniforms: if you say school uniforms are good and I say they're bad, it might turn out that the reason you believe they're good is that you think they reduce bullying – and if I can show you that actually they don't reduce bullying, then you will no longer think they're good. So finding that crux of the disagreement. I feel like that's the right approach to arguing: what is it that we actually disagree about here? We often disagree about huge cloudy ideas like feminism, or climate change, or gender identity – things that come with a huge wash of associated ideas and affect and mood. What's a good idea in those situations is to say: what is the concrete thing about which we disagree, such that if we can establish it – say, whether school uniforms really do reduce bullying – one of us will change our mind? I think that's the sort of norm you want to encourage in a social situation. And, you know, I'm on the outside of the rationalist community looking in, but I do get the impression they're much better at it than a lot of people. There's a lot less of just arguing about whether we should attach positive affect to the concept of feminism or negative affect to it – "I like this," "I don't like this." I think something really laudable goes on with that attitude. So I agree with Simler on that aspect.

SPENCER: Yeah, one of my strategies for talking about difficult topics is to come at it in a way where people can't place what team I'm on. I try to not be on any team other than team human flourishing – that's my goal, to only be on team human flourishing. And I want all conscious beings, not just humans, to do well. But it can be really tricky. You know, going back to what we were talking about before: if you come at these things from a super nerdy angle, people can say, "Oh, this person must be alt-right, because they're criticizing feminism," or something like that, right? And if you come at it from the wamb perspective, you can't even evaluate the pros and cons at all, right? So how do you actually have these important conversations about difficult topics, where we can have nuanced discussions about the advantages and disadvantages, the pros and cons, and so on? The best I've figured out is to just talk about it in a way where nobody knows what team you're on and they can't figure you out. And then you can actually have the conversation. So that's my approach to this stuff.

TOM: You can often do it with analogies, I find. You start with a less heavily charged idea that they'll probably agree with, and then you can draw analogies from it. And I think there is a bit of skill in being a nerd-wamb translator. One of the things I've found myself doing a lot in my career is trying to present these sort of high-decoupling, nerdy ideas in ways that are palatable to, and understandable by, people who are more on the wamby end of the spectrum. I hope this doesn't sound like I'm being derogatory, because, you know, some of my best friends are wambs, but I do think –

SPENCER: It doesn't protect you from prejudice. Is that right?

TOM: Oh, no, it doesn't. No, of course – yeah, I'm sorry, I realize that's a big red light, isn't it? Red flag. But I do think there are ways you can get these ideas across: "In this other context, you might agree with this – so can we draw an analogy to this?" Sometimes it doesn't work, and sometimes it does. But it can be done to some extent.

[promo]

SPENCER: I just have a few follow-up questions, in that you're one of the only people that's ever kind of studied the rationalists. And I assume that you identify as rationalist-adjacent but not a rationalist yourself. Is that accurate?

TOM: I think so, yeah. I mean, I'm definitely rationalist-fond, rationalist-sympathetic, you know. In the whole business with the New York Times and Slate Star Codex, I felt very protective of the rationalists and felt they were being really harshly treated – Scott was having a hard time. I don't think I could reasonably call myself a rationalist, just because I feel like you need to live much more of your life in the community than I do. I have friends who are quite rationalist, like [inaudible], and I hang out a lot with superforecaster-type people. But most of my life and social group – old school friends and so on, or journalists – is totally separate from all that. So I see myself as a sympathetic outsider who keeps an eye on it and translates the good bits for the rest of the world. Because I think there are some really interesting ideas that come out of it, and it's an easily misunderstood group who can get themselves in a lot of trouble if they're not careful, for the reasons we discussed earlier. I like being someone who says, let's take these ideas and treat them sympathetically and try to find what's useful about them, rather than doing the thing people can be over-keen to do of just leaping on them, trying to find unacceptable things they've said and dragging them out of context – you know, like the "Wehrmacht as efficient fighting machine" example we were talking about earlier on. So no, I'm not a rationalist. I don't think I'm good enough at numbers and things to be a very good rationalist, and I'm certainly not involved enough in the community to be a real member of it. But I'm sympathetic, I read a lot of the output, and I generally move on the edges of the same circles.

SPENCER: I guess a lot of rationalists would deny they're good with numbers too. But that aside, you know, I feel like you have this really interesting sociological perspective that almost nobody has, where you've kind of studied this group, you're sympathetic to it, but you're not part of it. So maybe it lets you see things that people far outside the group can't see, but also that people in the group can't see. So I'm wondering, you'd already read the Sequences, and you were familiar with Slate Star Codex, so what surprised you as you dug deeper and got to know the rationalists?

TOM: Mm-hmm. I think we talked about it a bit already, but the paranoia, if that's the word, or the sort of insularity. There is a streak of paranoia, I guess, which I hadn't expected.

SPENCER: Like being misunderstood, basically.

TOM: Yeah, exactly. In hindsight, it makes perfect sense to me, although, you know, "I could have predicted that in hindsight" is such a ridiculous thing to say. I do think of that example I gave earlier, of the guy saying, "Don't talk to this guy. It'll be a hatchet job." I was kind of hurt at the time; I'm quite a nice guy, and I'm not out to destroy anyone. But on reflection, it makes sense, because there is this exact nerdy vulnerability that they have, the tendency to say things that seem to them perfectly sensible and harmless. And then there is what must, to them, feel like a roving gang of sharks just outside the boat, which periodically leaps in and bites someone's head off for no apparent reason, you know. That was something that surprised me. And when I actually met rationalists in person, which I have done a few times now, the really self-declared ones at meetups in Berkeley and in London, the sheer lack of small talk was amazing. Like, you turn up and you order a pizza or whatever and you sit down, and I was expecting, you know, "How are you? How are the kids? What's going on with you?" It was straight in on AI risk, straight in on the big topics: are we in a simulation? There's no mucking around. And the London meetup, actually, was rather sweet, because about two-thirds of the hour and a half they were there seemed to be spent establishing Annual General Meeting-style rules, you know, hold up the ball when you're about to speak, or something like that. And only in the last half hour did they actually get around to talking about anything, because there was some sort of analysis-paralysis-type thing of, we will spend so long setting up the rules of how we speak that we won't have any time left for speaking. Which is sweet, but, you know, that wasn't my usual experience of going to pubs in London with people. And that was difficult, because, I don't know if you've noticed, but I'm a nervous talker, and I try to make jokes to lighten social awkwardness and these things. And God, they fell flat, like the thump of a book hitting the floor in a silent library. It was just awful. Everyone was like, yeah, yeah, no, we're here to talk about AI risk, you know. Oh God, I felt like such an idiot. But you get the hang of it, and it was fine. There was also, again, that concern for correctness over social harmony. I remember Katja Grace, whom I met among other people in Berkeley; they were asking me why I wanted to write this book, and I must have given, like, two incompatible answers. I wish I could remember what they were, but, you know, something like justifying it by saying I think it's good to explain to the world what rationalist beliefs are. In wamby company, most of the time, people would either not notice or not be so impolite as to bring up the fact that I'd just said two things that don't make sense together, that are incompatible with each other. But Katja, a little bit later on, just said, "Well, which one do you mean?" Okay, right. Yeah, this is awkward. It's a real change of gears from the sort of conversations I'm used to having. And that was surprising to me, though I guess, again, it shouldn't have been, in retrospect.
These are very avowedly nerdy people, very avowedly committed to being less wrong – it's right there in the name, right? You can't come into those meetings and expect warm, fuzzy social chat; it's going to be about getting things right and understanding stuff. I do remember there was a guy, Buck Shlegeris – this is in the book, so I don't feel I'm breaking any confidences here, or at least any that I haven't already broken – and he said a book on this topic could be really good. And then he added, you know, "If I could jump into your body and write it, I'd have high confidence that it would be good." The implication sort of being, "I have very little confidence that you could write this book." It took a little bit of getting used to, this sort of starkness, because normally there is a lot of social smoothing and oil-pouring, like, "I'm sure it'll be brilliant, I have every faith in you." No, none of that. And I remember exchanging emails with Eliezer, because Eliezer just said, "I'm not talking to you for this. I'll answer technical questions by email, but I will not do an interview." Okay. And I remember him saying something about utilitarianism along the lines of, actually, most people aren't smart enough to operate utilitarianism on a case-by-case basis; the way to run it is with rules. And very much the stark implication was that this applied to me specifically, you know: this thing you've said is so dumb, you are not smart enough to run it. So it took a bit of recalibrating expectations to not take offense and just notice that this is nerdiness, concern for being correct over concern for social harmony. Once you're used to it, it's fine. But that first half-hour or so, when you're just getting used to it, took a bit of a handbrake turn on how to deal with things. I don't want to be critical, though. I think it's great, and I think it's a good way of running a social group. But like I say, that was an unexpected thing for me.

SPENCER: That's really interesting to hear. Is there anything that you feel you learned from studying the rationalists that you're going to take away, anything that you'll be doing differently because of it?

TOM: That's such a huge question. I think I do almost everything differently. I approach arguments differently. I'm much less confident in my own beliefs than I used to be, and I'm much more willing to stress-test my ideas, much more willing to say, well, I could well be wrong about this. A lot of my writing since the book has been about expressing or understanding rationalist ideas and trying to get to what is true rather than what is socially acceptable, which, you know, I felt like I was trying to do before, but certainly I feel I'm doing it more. Also, like I was saying, my social group is not mainly rationalist, but it is much more so now, with people I've met through the book who are superforecasters and rationalists and EAs. You know, I was out for a drink in London with an effective altruist from 80,000 Hours the other day, and actually he was saying exactly that: everyone in the rationalist community was really nervous about the book beforehand, and now I've been somewhat welcomed into it for not doing a hatchet job. So yeah, writing the book and investigating the rationalists has changed huge amounts about my life and how I approach things intellectually, and also, like, literally just the people I hang around with. So everything is pretty different now.

SPENCER: Tom, thanks so much for coming on. This is really fun.

TOM: No, it's great. Really enjoyed it.

[outro]
