The Future of Life Institute
AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy

What role does cyber security play in alignment and safety? What is AI completeness? What is the space of mind design and what does it tell us about AI safety? How does the possibility of machine qualia fit into this space? Can we leak proof the singularity to ensure we are able to test AGI? And what is computational complexity theory anyway? AI Safety, Possible Minds, and Simulated Worlds is the third podcast in the new AI Alignment series, hosted by Lucas Perry. For those of you that are...


Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams

How are emerging technologies like artificial intelligence shaping our world and how we interact with one another? What do different demographics think about AI risk and a robot-filled future? And how can the average citizen contribute not only to the AI discussion, but also to AI's development? On this month's podcast, Ariel spoke with Charlie Oliver and Randi Williams about how technology is reshaping our world, and how their new project, Mission AI, aims to broaden the conversation and include...


Astronomical Future Suffering and Superintelligence with Kaj Sotala

In the classic taxonomy of risks developed by Nick Bostrom, existential risks are characterized as risks that are both terminal in severity and transgenerational in scope. If we were to keep a risk's scope transgenerational but increase its severity beyond terminal, what would such a risk look like? What would it mean for a risk to be transgenerational in scope and hellish in severity? In this podcast, Lucas spoke with Kaj Sotala, a fifth-year Ph.D. student at UC Berkeley and an...


Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler

With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these...


What Are the Odds of Nuclear War? A Conversation with Seth Baum and Robert de Neufville

What are the odds of a nuclear war happening this century? And how close have we come to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing. On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability...


Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell

Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will be covering and exploring the AI alignment problem across a large variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will be having discussions with technical and non-technical researchers across a variety of areas, such as machine learning, AI safety, governance, coordination, ethics,...


Navigating AI Safety -- From Malicious Use to Accidents

Is the malicious use of artificial intelligence inevitable? If the history of technological progress has taught us anything, it's that every "beneficial" technological breakthrough can be used to cause harm. How can we keep bad actors from using otherwise beneficial AI technology to hurt others? How can we ensure that AI technology is designed thoughtfully to prevent accidental harm or misuse? On this month's podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from...


Robin Hanson On The Age Of Em

Dr. Robin Hanson talks about the Age of Em, the future and evolution of humanity, and his research for his next book.


Earthquakes As Existential Risks?

Could an earthquake become an existential or catastrophic risk that threatens all of humanity? Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of the Future of Life Institute consider extreme earthquake scenarios to determine whether such a risk is plausible. Featuring seismologist Martin Chapman of Virginia Tech. (Edit: This episode was just for fun, in a similar vein to MythBusters. We wanted to see just how far we could go.)