June 9, 2022
How hard is it to arrive at true beliefs about the world? How can you find enjoyment in being wrong? When presenting claims that will be scrutinized by others, is it better to hedge and pad the claims with lots of caveats and uncertainty, or to strive for a tone that matches (or perhaps even exaggerates) the intensity with which you hold your beliefs? Why might you focus on drilling small skills when learning a new skill set? What counts as a "simple" question? How can you tell when you actually understand something and when you don't? What is "cargo culting"? Which features of AI are likely to become existential threats in the future? What are the hardest parts of AI research? What skills will we probably really wish we had on the eve of deploying superintelligent AIs?
Buck Shlegeris is the CTO of Redwood Research, an independent AI alignment research organization. He currently leads their interpretability research. He previously worked on research and outreach at the Machine Intelligence Research Institute. His website is shlegeris.com.
Host / Director: Spencer Greenberg
Producer: Josh Castle
Audio Engineer: Ryan Kessler
Factotum: Uri Bram
Transcriptionists: WeAmplify
Music: Lee Rosevere, Josh Woodward, Broke for Free, zapsplat.com, wowamusic, Quiet Music for Tiny Robots
Affiliates: Please note that Clearer Thinking, Mind Ease, and UpLift are all affiliated with this podcast.