The Future of Life Institute


More Information

Location: United States
Twitter: @FLIxrisk
Language: English


Episodes

Beneficial AI And Existential Hope In 2018

12/21/2017
For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we've built, including: the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how...

Duration: 00:37:25


Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe

11/30/2017
What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future? To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona...

Duration: 00:34:59


AI Ethics, A Trolley Problem, And A Twitter Ghost Story With Joshua Greene And Iyad Rahwan

10/30/2017
As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI? To help consider these questions, Joshua Greene and Iyad Rahwan...

Duration: 00:45:24


80,000 Hours with Rob Wiblin and Brenton Mayer

9/29/2017
If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer, or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer. They try to figure out how individuals can set themselves up to help as many people as possible in as big a way as possible. To learn more about their research, Ariel invited Rob Wiblin...

Duration: 00:58:41


Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark

8/29/2017
Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. “It” is Max Tegmark's new book, Life 3.0: Being Human in the Age of Artificial Intelligence. In this interview, Ariel speaks with Max about the future of artificial intelligence. What will happen when machines surpass humans...

Duration: 00:34:48


The Art Of Predicting With Anthony Aguirre And Andrew Critch

7/31/2017
How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks. Visit metaculus.com to try your hand at...

Duration: 00:57:54


Banning Nuclear & Autonomous Weapons With Richard Moyes And Miriam Struyk

6/30/2017
How does a weapon go from being one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36....

Duration: 00:41:01


Creative AI With Mark Riedl & Scientists Support A Nuclear Ban

6/1/2017
This is a special two-part podcast. First, Mark and Ariel discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining "common sense reasoning." They also discuss the “big red button” problem in AI safety research, the process of teaching "rationalization" to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work has focused on human-AI interaction and...

Duration: 00:43:50


Climate Change With Brian Toon And Kevin Trenberth

4/27/2017
I recently visited the National Center for Atmospheric Research in Boulder, CO, and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different climate discussion: not about whether climate change is real, but about what it is, what its effects could be, and how we can prepare for the future.

Duration: 00:46:55


Law and Ethics of AI with Ryan Jenkins and Matt Scherer

3/31/2017
The rise of artificial intelligence presents not only technical challenges, but important legal and ethical challenges for society, especially regarding machines like autonomous weapons and self-driving cars. To discuss these issues, I interviewed Matt Scherer and Ryan Jenkins. Matt is an attorney and legal scholar whose scholarship focuses on the intersection between law and artificial intelligence. Ryan is an assistant professor of philosophy and a senior fellow at the Ethics and...

Duration: 00:58:19


AI Breakthroughs With Ian Goodfellow And Richard Mallah

1/31/2017
2016 saw some significant AI developments. To talk about the AI progress of the last year, we turned to Richard Mallah and Ian Goodfellow. Richard is the director of AI projects at FLI, a senior advisor to multiple AI companies, and the creator of the highest-rated enterprise text analytics platform. Ian is a research scientist at OpenAI, the lead author of a deep learning textbook, and the inventor of Generative Adversarial Networks. Listen to the podcast here or review the...

Duration: 00:54:13


FLI 2016 - A Year In Review

12/30/2016
FLI's founders and core team -- Max Tegmark, Meia Chita-Tegmark, Anthony Aguirre, Victoria Krakovna, Richard Mallah, Lucas Perry, David Stanley, and Ariel Conn -- discuss the developments of 2016 they were most excited about, as well as why they're looking forward to 2017.

Duration: 00:32:21


Heather Roff and Peter Asaro on Autonomous Weapons

11/30/2016
Drs. Heather Roff and Peter Asaro, two experts in autonomous weapons, talk about their work to understand and define the role of autonomous weapons, the problems these weapons pose, and why the ethical issues they raise are so much more complicated than those raised by other AI systems.

Duration: 00:33:57


Nuclear Winter With Alan Robock and Brian Toon

10/31/2016
I recently sat down with meteorologist Alan Robock from Rutgers University and physicist Brian Toon from the University of Colorado to discuss what is potentially the most devastating consequence of nuclear war: nuclear winter.

Duration: 00:46:44


Robin Hanson On The Age Of Em

9/27/2016
Dr. Robin Hanson talks about the Age of Em, the future and evolution of humanity, and his research for his next book.

Duration: 00:24:38


Nuclear Risk In The 21st Century

9/20/2016
In this podcast interview, Lucas and Ariel discuss the concepts of nuclear deterrence, hair trigger alert, the potential consequences of nuclear war, and how individuals can do their part to lower the risks of nuclear catastrophe.

Duration: 00:15:34


Earthquakes As Existential Risks?

7/25/2016
Could an earthquake become an existential or catastrophic risk that puts all of humanity at risk? Seth Baum of the Global Catastrophic Risk Institute and Ariel Conn of the Future of Life Institute consider extreme earthquake scenarios to figure out if such a risk is plausible. Featuring seismologist Martin Chapman of Virginia Tech. (Edit: This was just for fun, in a similar vein to MythBusters. We wanted to see just how far we could go.)

Duration: 00:27:36


Nuclear Interview with David Wright

1/14/2016
An interview with David Wright by the Future of Life Institute.

Duration: 00:27:27


Climate interview with Seth Baum

12/22/2015
An interview with Seth Baum, Executive Director of the Global Catastrophic Risk Institute, about whether the Paris Climate Agreement can be considered a success.

Duration: 00:12:31