May 23, 2021
Why is YouTube such a great way to communicate research findings? Why is AI safety (or alignment) a problem? Why is it an important problem? Why is the creation of AGI (artificial general intelligence) existentially risky for us? Why is it so hard for us to specify what we want in utility functions? What are some of the proposed strategies (and their limitations) for controlling AGI? What is instrumental convergence? What is the unilateralist's curse?
Rob Miles is a science communicator focused on AI Safety and Alignment. He has a YouTube channel called Rob Miles AI and runs The Alignment Newsletter Podcast, which presents summaries of the week's research. He also collaborates with research organizations like the Machine Intelligence Research Institute and the Future of Humanity Institute to help them communicate their work.
Music: "Quiet Music for Tiny Robots" by Broke for Free
Please note that Clearer Thinking, Mind Ease, and UpLift are all affiliated with this podcast.