The Future of Life Institute

AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry

What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration? Ariel spoke with FLI's Meia Chita-Tegmark and Lucas Perry on this month's podcast about the value...


Top AI Breakthroughs and Challenges of 2017

AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning -- 2017 was another year of big breakthroughs and big challenges for AI researchers! To discuss this more, we invited FLI's Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month's podcast. They talked about some of the progress they were most excited to see last year and what they're looking forward to in the coming year.


Beneficial AI And Existential Hope In 2018

For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we've built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how...


Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe

What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future? To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona...


AI Ethics, A Trolley Problem, And A Twitter Ghost Story With Joshua Greene And Iyad Rahwan

As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI? To help consider these questions, Joshua Greene and Iyad Rahwan...


Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark

Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. “It” is Max Tegmark's new book, Life 3.0: Being Human in the Age of Artificial Intelligence. In this interview, Ariel speaks with Max about the future of artificial intelligence. What will happen when machines surpass humans...


The Art Of Predicting With Anthony Aguirre And Andrew Critch

How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks. Visit to try your hand at...


Banning Nuclear & Autonomous Weapons With Richard Moyes And Miriam Struyk

How does a weapon go from being one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36....


Creative AI With Mark Riedl & Scientists Support A Nuclear Ban

This is a special two-part podcast. First, Mark and Ariel discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining "common sense reasoning." They also discuss the “big red button” problem in AI safety research, the process of teaching "rationalization" to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work has focused on human-AI interaction and...


Climate Change With Brian Toon And Kevin Trenberth

I recently visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different climate discussion: not about whether climate change is real, but about what it is, what its effects could be, and how we can prepare for the future.


AI Breakthroughs With Ian Goodfellow And Richard Mallah

2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the director of AI projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks. Listen to the podcast here or review the...


FLI 2016 - A Year In Review

FLI's founders and core team -- Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Victoria Krakovna, Richard Mallah, Lucas Perry, David Stanley, and Ariel Conn -- discuss the developments of 2016 they were most excited about, as well as why they're looking forward to 2017.


Heather Roff and Peter Asaro on Autonomous Weapons

Drs. Heather Roff and Peter Asaro, two experts in autonomous weapons, talk about their work to understand and define the role of autonomous weapons, the problems these weapons pose, and why the ethical issues they raise are so much more complicated than those of other AI systems.


Nuclear Winter With Alan Robock and Brian Toon

I recently sat down with meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.


Robin Hanson On The Age Of Em

Dr. Robin Hanson talks about the Age of Em, the future and evolution of humanity, and his research for his next book.


Nuclear Risk In The 21st Century

In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair-trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe.


Earthquakes As Existential Risks?

Could an earthquake become an existential or catastrophic risk that puts all of humanity at risk? Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of the Future of Life Institute consider extreme earthquake scenarios to figure out if such a risk is plausible. Featuring seismologist Martin Chapman of Virginia Tech. (Edit: This was just for fun, in a similar vein to MythBusters. We wanted to see just how far we could go.)



Nuclear Interview With David Wright


Climate Interview With Seth Baum

An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute, about whether the Paris Climate Agreement can be considered a success.