
The Future of Life Institute


More Information

Location: United States
Twitter: @FLIxrisk
Language: English


Episodes

On Becoming a Moral Realist with Peter Singer

10/18/2018
Are there such things as moral facts? If so, how might we be able to access them? Peter Singer started his career as a preference utilitarian and a moral anti-realist, and then over time became a hedonic utilitarian and a moral realist. How does such a transition occur, and which positions are more defensible? How might objectivism in ethics affect AI alignment? What does this all mean for the future of AI? On Becoming a Moral Realist with Peter Singer is the sixth podcast in the AI...

Duration: 00:51:13

On the Future: An Interview with Martin Rees

10/11/2018
How can humanity survive the next century of climate change, a growing population, and emerging technological threats? Where do we stand now, and what steps can we take to cooperate and address our greatest existential risks? In this special podcast episode, Ariel speaks with cosmologist Martin Rees about his new book, On the Future: Prospects for Humanity, which discusses humanity’s existential risks and the role that technology plays in determining our collective future. Topics discussed...

Duration: 00:53:01

AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz

9/27/2018
On this month’s podcast, Ariel spoke with Paul Scharre and Mike Horowitz from the Center for a New American Security about the role of automation in the nuclear sphere, and how the proliferation of AI technologies could change nuclear posturing and the effectiveness of deterrence. Paul is a former Pentagon policy official and the author of Army of None: Autonomous Weapons in the Future of War. Mike Horowitz is a professor of political science at the University of Pennsylvania, and the...

Duration: 00:51:11

Moral Uncertainty and the Path to AI Alignment with William MacAskill

9/17/2018
How are we to make progress on AI alignment given moral uncertainty? What are the ideal ways of resolving conflicting value systems and views of morality among persons? How ought we to go about AI alignment given that we are unsure about our normative and metaethical theories? How should preferences be aggregated and persons idealized in the context of our uncertainty? Moral Uncertainty and the Path to AI Alignment with William MacAskill is the fifth podcast in the new AI Alignment series,...

Duration: 00:56:56