
AXRP - the AI X-risk Research Podcast
AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.
Location:
United States
Genres:
Science Podcasts
Language:
English
Website:
https://axrpodcast.libsyn.com/
Email:
feedback@axrp.net
49 - Caspar Oesterheld on Program Equilibrium
Duration: 02:32:07
48 - Guive Assadi on AI Property Rights
Duration: 02:05:34
47 - David Rein on METR Time Horizons
Duration: 01:47:17
46 - Tom Davidson on AI-enabled Coups
Duration: 02:05:26
45 - Samuel Albanie on DeepMind's AGI Safety Approach
Duration: 01:15:42
44 - Peter Salib on AI Rights for Human Safety
Duration: 03:21:33
43 - David Lindner on Myopic Optimization with Non-myopic Approval
Duration: 01:40:59
42 - Owain Evans on LLM Psychology
Duration: 02:14:26
41 - Lee Sharkey on Attribution-based Parameter Decomposition
Duration: 02:16:11
40 - Jason Gross on Compact Proofs and Interpretability
Duration: 02:36:05
39 - Evan Hubinger on Model Organisms of Misalignment
Duration: 01:45:47
38.8 - David Duvenaud on Sabotage Evaluations and the Post-AGI Future
Duration: 00:20:42
38.7 - Anthony Aguirre on the Future of Life Institute
Duration: 00:22:39
38.6 - Joel Lehman on Positive Visions of AI
Duration: 00:15:28
38.5 - Adrià Garriga-Alonso on Detecting AI Scheming
Duration: 00:27:41
38.4 - Shakeel Hashim on AI Journalism
Duration: 00:24:14
38.3 - Erik Jenner on Learned Look-Ahead
Duration: 00:23:46
38.2 - Jesse Hoogland on Singular Learning Theory
Duration: 00:18:18
38.1 - Alan Chan on Agent Infrastructure
Duration: 00:24:48