
The Future of Life Institute


More Information

Location: United States

Twitter: @FLIxrisk

Language: English


Episodes

On Becoming a Moral Realist with Peter Singer

10/18/2018
Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI? On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI...

Duration: 00:51:13

On the Future: An Interview with Martin Rees

10/11/2018
How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Topics discussed...

Duration: 00:53:01

AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

9/27/2018
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania, and the...

Duration: 00:51:11

Moral Uncertainty and the Path to AI Alignment with William MacAskill

9/17/2018
How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty? Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series,...

Duration: 00:56:56

AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins

8/31/2018
Experts predict that artificial intelligence could become the most transformative innovation in history, eclipsing both the development of agriculture and the industrial revolution. And the technology is developing far faster than the average bureaucracy can keep up with. How can local, national, and international governments prepare for such dramatic changes and help steer AI research and use in a more beneficial direction? On this month’s podcast, Ariel spoke with Allan Dafoe and Jessica...

Duration: 00:44:17

Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler

5/31/2018
With the U.S. pulling out of the Iran deal and canceling (and potentially un-canceling) the summit with North Korea, nuclear weapons have been front and center in the news this month. But will these disagreements lead to a world with even more nuclear weapons? And how did the recent nuclear situations with North Korea and Iran get so tense? To learn more about the geopolitical issues surrounding North Korea’s and Iran’s nuclear situations, as well as to learn how nuclear programs in these...

Duration: 00:42:26

What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville

4/30/2018
What are the odds of a nuclear war happening this century? And how close have we been to nuclear war in the past? Few academics focus on the probability of nuclear war, but many leading voices, like former US Secretary of Defense William Perry, argue that the threat of nuclear conflict is growing. On this month's podcast, Ariel spoke with Seth Baum and Robert de Neufville from the Global Catastrophic Risk Institute (GCRI), who recently coauthored a report titled A Model for the Probability...

Duration: 00:57:49

Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell

4/25/2018
Inverse Reinforcement Learning and Inferring Human Preferences is the first podcast in the new AI Alignment series, hosted by Lucas Perry. This series will explore the AI alignment problem across a wide variety of domains, reflecting the fundamentally interdisciplinary nature of AI alignment. Broadly, we will speak with technical and non-technical researchers across areas such as machine learning, AI safety, governance, coordination, ethics,...

Duration: 01:24:53

Navigating AI Safety -- From Malicious Use to Accidents

3/30/2018
Is the malicious use of artificial intelligence inevitable? If the history of technological progress has taught us anything, it's that every "beneficial" technological breakthrough can be used to cause harm. How can we keep bad actors from using otherwise beneficial AI technology to hurt others? How can we ensure that AI technology is designed thoughtfully to prevent accidental harm or misuse? On this month's podcast, Ariel spoke with FLI co-founder Victoria Krakovna and Shahar Avin from...

Duration: 00:57:58

AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry

2/27/2018
What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value? Building safe and beneficial AI involves tricky technical research problems, but it also requires input from philosophers, ethicists, and psychologists on these fundamental questions. How can we ensure the most effective collaboration? Ariel spoke with FLI's Meia Chita-Tegmark and Lucas Perry on this month's podcast about the value...

Duration: 00:49:30

Top AI Breakthroughs and Challenges of 2017

1/31/2018
AlphaZero, progress in meta-learning, the role of AI in fake news, the difficulty of developing fair machine learning -- 2017 was another year of big breakthroughs and big challenges for AI researchers! To discuss this more, we invited FLI's Richard Mallah and Chelsea Finn from UC Berkeley to join Ariel for this month's podcast. They talked about some of the progress they were most excited to see last year and what they're looking forward to in the coming year.

Duration: 00:30:49

Beneficial AI And Existential Hope In 2018

12/21/2017
For most of us, 2017 has been a roller coaster, from increased nuclear threats to incredible advancements in AI to crazy news cycles. But while it’s easy to be discouraged by various news stories, we at FLI find ourselves hopeful that we can still create a bright future. In this episode, the FLI team discusses the past year and the momentum we've built, including the Asilomar Principles, our 2018 AI safety grants competition, the recent Long Beach workshop on Value Alignment, and how...

Duration: 00:37:25

Balancing the Risks of Future Technologies With Andrew Maynard and Jack Stilgoe

11/30/2017
What does it mean for technology to “get it right,” and why do tech companies ignore long-term risks in their research? How can we balance near-term and long-term AI risks? And as tech companies become increasingly powerful, how can we ensure that the public has a say in determining our collective future? To discuss how we can best prepare for societal risks, Ariel spoke with Andrew Maynard and Jack Stilgoe on this month’s podcast. Andrew directs the Risk Innovation Lab in the Arizona...

Duration: 00:34:59

AI Ethics, the Trolley Problem, and a Twitter Ghost Story with Joshua Greene And Iyad Rahwan

10/30/2017
As technically challenging as it may be to develop safe and beneficial AI, this challenge also raises some thorny questions regarding ethics and morality, which are just as important to address before AI is too advanced. How do we teach machines to be moral when people can't even agree on what moral behavior is? And how do we help people deal with and benefit from the tremendous disruptive change that we anticipate from AI? To help consider these questions, Joshua Greene and Iyad Rahwan...

Duration: 00:45:24

80,000 Hours with Rob Wiblin and Brenton Mayer

9/29/2017
If you want to improve the world as much as possible, what should you do with your career? Should you become a doctor, an engineer or a politician? Should you try to end global poverty, climate change, or international conflict? These are the questions that the research group 80,000 Hours tries to answer. They try to figure out how individuals can set themselves up to help as many people as possible in as big a way as possible. To learn more about their research, Ariel invited Rob Wiblin...

Duration: 00:58:41

Life 3.0: Being Human in the Age of Artificial Intelligence with Max Tegmark

8/29/2017
Elon Musk has called it a compelling guide to the challenges and choices in our quest for a great future of life on Earth and beyond, while Stephen Hawking and Ray Kurzweil have referred to it as an introduction and guide to the most important conversation of our time. “It” is Max Tegmark's new book, Life 3.0: Being Human in the Age of Artificial Intelligence. In this interview, Ariel speaks with Max about the future of artificial intelligence. What will happen when machines surpass humans...

Duration: 00:34:48

The Art Of Predicting With Anthony Aguirre And Andrew Critch

7/31/2017
How well can we predict the future? In this podcast, Ariel speaks with Anthony Aguirre and Andrew Critch about the art of predicting the future, what constitutes a good prediction, and how we can better predict the advancement of artificial intelligence. They also touch on the difference between predicting a solar eclipse and predicting the weather, what it takes to make money on the stock market, and the bystander effect regarding existential risks. Visit metaculus.com to try your hand at...

Duration: 00:57:54

Banning Nuclear & Autonomous Weapons With Richard Moyes And Miriam Struyk

6/30/2017
How does a weapon go from one of the most feared to being banned? And what happens once the weapon is finally banned? To discuss these questions, Ariel spoke with Miriam Struyk and Richard Moyes on the podcast this month. Miriam is Programs Director at PAX. She played a leading role in the campaign banning cluster munitions and developed global campaigns to prohibit financial investments in producers of cluster munitions and nuclear weapons. Richard is the Managing Director of Article 36....

Duration: 00:41:01

Creative AI With Mark Riedl & Scientists Support A Nuclear Ban

6/1/2017
This is a special two-part podcast. First, Mark and Ariel discuss how AIs can use stories and creativity to understand and exhibit culture and ethics, while also gaining "common sense reasoning." They also discuss the “big red button” problem in AI safety research, the process of teaching "rationalization" to AIs, and computational creativity. Mark is an associate professor at the Georgia Tech School of Interactive Computing, where his recent work has focused on human-AI interaction and...

Duration: 00:43:50

Climate Change With Brian Toon And Kevin Trenberth

4/27/2017
I recently visited the National Center for Atmospheric Research in Boulder, CO and met with climate scientists Dr. Kevin Trenberth and CU Boulder’s Dr. Brian Toon to have a different climate discussion: not about whether climate change is real, but about what it is, what its effects could be, and how we can prepare for the future.

Duration: 00:46:55