AXRP - the AI X-risk Research Podcast

Science Podcasts

AXRP (pronounced axe-urp) is the AI X-risk Research Podcast where I, Daniel Filan, have conversations with researchers about their papers. We discuss the paper, and hopefully get a sense of why it's been written and how it might reduce the risk of AI causing an existential catastrophe: that is, permanently and drastically curtailing humanity's future potential. You can visit the website and read transcripts at axrp.net.

Location:

United States

Language:

English


Episodes

31 - Singular Learning Theory with Daniel Murfet

5/6/2024
What's going on with deep learning? What sorts of models get learned, and what are the learning dynamics? Singular learning theory is a theory of Bayesian statistics broad enough in scope to encompass deep neural networks, and it may help answer these questions. In this episode, I speak with Daniel Murfet about this research program and what it tells us.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:26 - What is singular learning theory?
0:16:00 - Phase transitions
0:35:12 - Estimating the local learning coefficient
0:44:37 - Singular learning theory and generalization
1:00:39 - Singular learning theory vs other deep learning theory
1:17:06 - How singular learning theory hit AI alignment
1:33:12 - Payoffs of singular learning theory for AI alignment
1:59:36 - Does singular learning theory advance AI capabilities?
2:13:02 - Open problems in singular learning theory for AI alignment
2:20:53 - What is the singular fluctuation?
2:25:33 - How geometry relates to information
2:30:13 - Following Daniel Murfet's work

The transcript: https://axrp.net/episode/2024/05/07/episode-31-singular-learning-theory-dan-murfet.html
Daniel Murfet's twitter/X account: https://twitter.com/danielmurfet
Developmental interpretability website: https://devinterp.com
Developmental interpretability YouTube channel: https://www.youtube.com/@Devinterp

Main research discussed in this episode:
- Developmental Landscape of In-Context Learning: https://arxiv.org/abs/2402.02364
- Estimating the Local Learning Coefficient at Scale: https://arxiv.org/abs/2402.03698
- Simple versus Short: Higher-order degeneracy and error-correction: https://www.lesswrong.com/posts/nWRj6Ey8e5siAEXbK/simple-versus-short-higher-order-degeneracy-and-error-1

Other links:
- Algebraic Geometry and Statistical Learning Theory (the grey book): https://www.cambridge.org/core/books/algebraic-geometry-and-statistical-learning-theory/9C8FD1BDC817E2FC79117C7F41544A3A
- Mathematical Theory of Bayesian Statistics (the green book): https://www.routledge.com/Mathematical-Theory-of-Bayesian-Statistics/Watanabe/p/book/9780367734817
- In-context learning and induction heads: https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
- Saddle-to-Saddle Dynamics in Deep Linear Networks: Small Initialization Training, Symmetry, and Sparsity: https://arxiv.org/abs/2106.15933
- A mathematical theory of semantic development in deep neural networks: https://www.pnas.org/doi/abs/10.1073/pnas.1820226116
- Consideration on the Learning Efficiency Of Multiple-Layered Neural Networks with Linear Units: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4404877
- Neural Tangent Kernel: Convergence and Generalization in Neural Networks: https://arxiv.org/abs/1806.07572
- The Interpolating Information Criterion for Overparameterized Models: https://arxiv.org/abs/2307.07785
- Feature Learning in Infinite-Width Neural Networks: https://arxiv.org/abs/2011.14522
- A central AI alignment problem: capabilities generalization, and the sharp left turn: https://www.lesswrong.com/posts/GNhMPAWcfBCASy8e6/a-central-ai-alignment-problem-capabilities-generalization
- Quantifying degeneracy in singular models via the learning coefficient: https://arxiv.org/abs/2308.12108

Episode art by Hamish Doodles: hamishdoodles.com
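For the mathematically curious, a rough pointer (my gloss, not a claim from the episode): the "local learning coefficient" in the timestamps above is the λ appearing in Watanabe's asymptotic expansion of the Bayesian free energy; see the grey book linked above for the precise statement and hypotheses.

```latex
% Informal statement of the free energy asymptotics in singular learning theory:
F_n \;=\; -\log \int e^{-n L_n(w)}\,\varphi(w)\,dw
    \;=\; n L_n(w_0) \;+\; \lambda \log n \;+\; o(\log n),
% where L_n is the empirical negative log-likelihood, w_0 an optimal parameter,
% and \varphi the prior. For regular models \lambda = d/2 (half the number of
% parameters); for singular models such as neural networks it can be much
% smaller, which is why it acts as an effective measure of model complexity.
```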

Duration:02:32:07

30 - AI Security with Jeffrey Ladish

4/30/2024
Top labs use various forms of "safety training" on models before their release to make sure they don't do nasty stuff - but how robust is that? How can we ensure that the weights of powerful AIs don't get leaked or stolen? And what can AI even do these days? In this episode, I speak with Jeffrey Ladish about security and AI.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:38 - Fine-tuning away safety training
0:13:50 - Dangers of open LLMs vs internet search
0:19:52 - What we learn by undoing safety filters
0:27:34 - What can you do with jailbroken AI?
0:35:28 - Security of AI model weights
0:49:21 - Securing against attackers vs AI exfiltration
1:08:43 - The state of computer security
1:23:08 - How AI labs could be more secure
1:33:13 - What does Palisade do?
1:44:40 - AI phishing
1:53:32 - More on Palisade's work
1:59:56 - Red lines in AI development
2:09:56 - Making AI legible
2:14:08 - Following Jeffrey's research

The transcript: axrp.net/episode/2024/04/30/episode-30-ai-security-jeffrey-ladish.html
Palisade Research: palisaderesearch.org
Jeffrey's Twitter/X account: twitter.com/JeffLadish

Main papers we discussed:
- LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B: arxiv.org/abs/2310.20624
- BadLLaMa: Cheaply Removing Safety Fine-tuning From LLaMa 2-Chat 13B: arxiv.org/abs/2311.00117
- Securing Artificial Intelligence Model Weights: rand.org/pubs/working_papers/WRA2849-1.html

Other links:
- Llama 2: Open Foundation and Fine-Tuned Chat Models: https://arxiv.org/abs/2307.09288
- Fine-tuning Aligned Language Models Compromises Safety, Even When Users Do Not Intend To!: https://arxiv.org/abs/2310.03693
- Shadow Alignment: The Ease of Subverting Safely-Aligned Language Models: https://arxiv.org/abs/2310.02949
- On the Societal Impact of Open Foundation Models (Stanford paper on marginal harms from open-weight models): https://crfm.stanford.edu/open-fms/
- The Operational Risks of AI in Large-Scale Biological Attacks (RAND): https://www.rand.org/pubs/research_reports/RRA2977-2.html
- Preventing model exfiltration with upload limits: https://www.alignmentforum.org/posts/rf66R4YsrCHgWx9RG/preventing-model-exfiltration-with-upload-limits
- A deep dive into an NSO zero-click iMessage exploit: Remote Code Execution: https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-into-nso-zero-click.html
- In-browser transformer inference: https://aiserv.cloud/
- Anatomy of a rental phishing scam: https://jeffreyladish.com/anatomy-of-a-rental-phishing-scam/
- Causal Scrubbing: a method for rigorously testing interpretability hypotheses: https://www.alignmentforum.org/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing

Episode art by Hamish Doodles: hamishdoodles.com
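To make the "upload limits" idea in the links above concrete, here is a back-of-the-envelope calculation with illustrative numbers of my own (not figures from the episode):

```latex
% Illustrative only: time to exfiltrate frontier-scale model weights.
% Assume 70 \times 10^9 parameters stored at 2 bytes each:
70 \times 10^9 \times 2\ \text{bytes} = 140\ \text{GB} \approx 1.12 \times 10^{12}\ \text{bits}.
% At a sustained 1 Gbit/s this takes about 1{,}120 s (roughly 19 minutes);
% capped at 10 Mbit/s it takes about 1.12 \times 10^5 s (roughly 31 hours),
% long enough that monitoring for sustained bulk uploads becomes a real defence.
```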

Duration:02:15:44

29 - Science of Deep Learning with Vikrant Varma

4/25/2024
In 2022, it was announced that a fairly simple method can be used to extract the true beliefs of a language model on any given topic, without having to actually understand the topic at hand. Earlier, in 2021, it was announced that neural networks sometimes 'grok': that is, when training them on certain tasks, they initially memorize their training data (achieving their training goal in a way that doesn't generalize), but then suddenly switch to understanding the 'real' solution in a way that generalizes. What's going on with these discoveries? Are they all they're cracked up to be, and if so, how are they working? In this episode, I talk to Vikrant Varma about his research getting to the bottom of these questions.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:36 - Challenges with unsupervised LLM knowledge discovery, aka contra CCS
0:00:36 - What is CCS?
0:09:54 - Consistent and contrastive features other than model beliefs
0:20:34 - Understanding the banana/shed mystery
0:41:59 - Future CCS-like approaches
0:53:29 - CCS as principal component analysis
0:56:21 - Explaining grokking through circuit efficiency
0:57:44 - Why research science of deep learning?
1:12:07 - Summary of the paper's hypothesis
1:14:05 - What are 'circuits'?
1:20:48 - The role of complexity
1:24:07 - Many kinds of circuits
1:28:10 - How circuits are learned
1:38:24 - Semi-grokking and ungrokking
1:50:53 - Generalizing the results
1:58:51 - Vikrant's research approach
2:06:36 - The DeepMind alignment team
2:09:06 - Follow-up work

The transcript: axrp.net/episode/2024/04/25/episode-29-science-of-deep-learning-vikrant-varma.html
Vikrant's Twitter/X account: twitter.com/vikrantvarma_

Main papers:
- Challenges with unsupervised LLM knowledge discovery: arxiv.org/abs/2312.10029
- Explaining grokking through circuit efficiency: arxiv.org/abs/2309.02390

Other works discussed:
- Discovering latent knowledge in language models without supervision (CCS): arxiv.org/abs/2212.03827
- Eliciting Latent Knowledge: How to Tell if your Eyes Deceive You: https://docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
- Discussion: Challenges with unsupervised LLM knowledge discovery: lesswrong.com/posts/wtfvbsYjNHYYBmT3k/discussion-challenges-with-unsupervised-llm-knowledge-1
- Comment thread on the banana/shed results: lesswrong.com/posts/wtfvbsYjNHYYBmT3k/discussion-challenges-with-unsupervised-llm-knowledge-1?commentId=hPZfgA3BdXieNfFuY
- Fabien Roger, What discovering latent knowledge did and did not find: lesswrong.com/posts/bWxNPMy5MhPnQTzKz/what-discovering-latent-knowledge-did-and-did-not-find-4
- Scott Emmons, Contrast Pairs Drive the Performance of Contrast Consistent Search (CCS): lesswrong.com/posts/9vwekjD6xyuePX7Zr/contrast-pairs-drive-the-empirical-performance-of-contrast
- Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets: arxiv.org/abs/2201.02177
- Keeping Neural Networks Simple by Minimizing the Minimum Description Length of the Weights (Hinton 1993 L2): dl.acm.org/doi/pdf/10.1145/168304.168306
- Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217

Episode art by Hamish Doodles: hamishdoodles.com
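To give a sense of how simple the CCS method under discussion is, here is a minimal sketch of its loss, following the 'Discovering latent knowledge' paper linked above (hidden-state normalization and the probe itself are omitted):

```python
import torch

def ccs_loss(p_pos: torch.Tensor, p_neg: torch.Tensor) -> torch.Tensor:
    """Contrast-Consistent Search objective. p_pos and p_neg are a probe's
    probabilities that a statement is true, computed from the hidden states of
    the "X is true" and "X is false" phrasings of the same statement."""
    # Consistency: the two probabilities should behave like p and 1 - p.
    consistency = (p_pos - (1.0 - p_neg)) ** 2
    # Confidence: rule out the degenerate solution p_pos = p_neg = 0.5.
    confidence = torch.minimum(p_pos, p_neg) ** 2
    return (consistency + confidence).mean()

# A linear probe on hidden states h_pos, h_neg would set
# p_pos = torch.sigmoid(h_pos @ w + b) and p_neg = torch.sigmoid(h_neg @ w + b),
# then train w, b to minimize ccs_loss - with no labels about what's true.
```

The episode (and the "Challenges with unsupervised LLM knowledge discovery" paper) is largely about why objectives like this can latch onto consistent, contrastive features other than the model's beliefs.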

Duration:02:13:46

28 - Suing Labs for AI Risk with Gabriel Weil

4/17/2024
How should the law govern AI? Those concerned about existential risks often push either for bans or for regulations meant to ensure that AI is developed safely - but another approach is possible. In this episode, Gabriel Weil talks about his proposal to modify tort law to enable people to sue AI companies for disasters that are "nearly catastrophic".

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:35 - The basic idea
0:20:36 - Tort law vs regulation
0:29:10 - Weil's proposal vs Hanson's proposal
0:37:00 - Tort law vs Pigouvian taxation
0:41:16 - Does disagreement on AI risk make this proposal less effective?
0:49:53 - Warning shots - their prevalence and character
0:59:17 - Feasibility of big changes to liability law
1:29:17 - Interactions with other areas of law
1:38:59 - How Gabriel encountered the AI x-risk field
1:42:41 - AI x-risk and the legal field
1:47:44 - Technical research to help with this proposal
1:50:47 - Decisions this proposal could influence
1:55:34 - Following Gabriel's research

The transcript: axrp.net/episode/2024/04/17/episode-28-tort-law-for-ai-risk-gabriel-weil.html

Links for Gabriel:
- SSRN page: papers.ssrn.com/sol3/cf_dev/AbsByAuth.cfm?per_id=1648032
- Twitter/X account: twitter.com/gabriel_weil

Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence: papers.ssrn.com/sol3/papers.cfm?abstract_id=4694006

Other links:
- Foom liability: overcomingbias.com/p/foom-liability
- Punitive Damages: An Economic Analysis: law.harvard.edu/faculty/shavell/pdf/111_Harvard_Law_Rev_869.pdf
- Efficiency, Fairness, and the Externalization of Reasonable Risks: The Problem With the Learned Hand Formula: papers.ssrn.com/sol3/papers.cfm?abstract_id=4466197
- Tort Law Can Play an Important Role in Mitigating AI Risk: forum.effectivealtruism.org/posts/epKBmiyLpZWWFEYDb/tort-law-can-play-an-important-role-in-mitigating-ai-risk
- How Technical AI Safety Researchers Can Help Implement Punitive Damages to Mitigate Catastrophic AI Risk: forum.effectivealtruism.org/posts/yWKaBdBygecE42hFZ/how-technical-ai-safety-researchers-can-help-implement
- Can the courts save us from dangerous AI? [Vox]: vox.com/future-perfect/2024/2/7/24062374/ai-openai-anthropic-deepmind-legal-liability-gabriel-weil

Episode art by Hamish Doodles: hamishdoodles.com

Duration:01:57:30

27 - AI Control with Buck Shlegeris and Ryan Greenblatt

4/11/2024
A lot of work to prevent AI existential risk takes the form of ensuring that AIs don't want to cause harm or take over the world - or in other words, ensuring that they're aligned. In this episode, I talk with Buck Shlegeris and Ryan Greenblatt about a different approach, called "AI control": ensuring that AI systems couldn't take over the world, even if they were trying to.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
0:00:31 - What is AI control?
0:16:16 - Protocols for AI control
0:22:43 - Which AIs are controllable?
0:29:56 - Preventing dangerous coded AI communication
0:40:42 - Unpredictably uncontrollable AI
0:58:01 - What control looks like
1:08:45 - Is AI control evil?
1:24:42 - Can red teams match misaligned AI?
1:36:51 - How expensive is AI monitoring?
1:52:32 - AI control experiments
2:03:50 - GPT-4's aptitude at inserting backdoors
2:14:50 - How AI control relates to the AI safety field
2:39:25 - How AI control relates to previous Redwood Research work
2:49:16 - How people can work on AI control
2:54:07 - Following Buck and Ryan's research

The transcript: axrp.net/episode/2024/04/11/episode-27-ai-control-buck-shlegeris-ryan-greenblatt.html

Links for Buck and Ryan:
- Buck's twitter/X account: twitter.com/bshlgrs
- Ryan on LessWrong: lesswrong.com/users/ryan_greenblatt
- You can contact both Buck and Ryan by electronic mail at [firstname] [at-sign] rdwrs.com

Main research works we talk about:
- The case for ensuring that powerful AIs are controlled: lesswrong.com/posts/kcKrE9mzEHrdqtDpE/the-case-for-ensuring-that-powerful-ais-are-controlled
- AI Control: Improving Safety Despite Intentional Subversion: arxiv.org/abs/2312.06942

Other things we mention:
- The prototypical catastrophic AI action is getting root access to its datacenter (aka "Hacking the SSH server"): lesswrong.com/posts/BAzCGCys4BkzGDCWR/the-prototypical-catastrophic-ai-action-is-getting-root
- Preventing language models from hiding their reasoning: arxiv.org/abs/2310.18512
- Improving the Welfare of AIs: A Nearcasted Proposal: lesswrong.com/posts/F6HSHzKezkh6aoTr2/improving-the-welfare-of-ais-a-nearcasted-proposal
- Measuring coding challenge competence with APPS: arxiv.org/abs/2105.09938
- Causal Scrubbing: a method for rigorously testing interpretability hypotheses: lesswrong.com/posts/JvZhhzycHu2Yd57RN/causal-scrubbing-a-method-for-rigorously-testing

Episode art by Hamish Doodles: hamishdoodles.com
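As a flavour of what a control protocol can look like, here is a schematic sketch in the spirit of the trusted-monitoring protocols from "AI Control: Improving Safety Despite Intentional Subversion" (linked above). The function names and the 2% audit budget are illustrative, not the paper's exact setup:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Solution:
    task_id: int
    code: str
    suspicion: float = 0.0

def trusted_monitoring(
    tasks: list[int],
    untrusted_solve: Callable[[int], str],      # capable but possibly scheming model
    trusted_suspicion: Callable[[str], float],  # weaker model we trust; returns 0-1
    audit: Callable[[Solution], bool],          # scarce human review; True = backdoor found
    audit_budget: float = 0.02,                 # audit only the most suspicious few percent
) -> list[Solution]:
    """Let the untrusted model do the work, have a trusted model score each
    output for suspiciousness, and spend a small human-audit budget on the
    most suspicious outputs."""
    solutions = []
    for t in tasks:
        s = Solution(task_id=t, code=untrusted_solve(t))
        s.suspicion = trusted_suspicion(s.code)
        solutions.append(s)
    n_audits = max(1, int(audit_budget * len(solutions)))
    for s in sorted(solutions, key=lambda s: s.suspicion, reverse=True)[:n_audits]:
        if audit(s):
            raise RuntimeError(f"Backdoor caught in task {s.task_id}: stop and investigate")
    return solutions
```

The interesting questions - what a red team can do to sneak past the monitor, and how expensive monitoring gets - are what the episode digs into.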

Duration:02:56:05

26 - AI Governance with Elizabeth Seger

11/26/2023
The events of this year have highlighted important questions about the governance of artificial intelligence. For instance, what does it mean to democratize AI? And how should we balance benefits and dangers of open-sourcing powerful AI systems such as large language models? In this episode, I speak with Elizabeth Seger about her research on these questions.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
- 0:00:40 - What kinds of AI?
- 0:01:30 - Democratizing AI
- 0:04:44 - How people talk about democratizing AI
- 0:09:34 - Is democratizing AI important?
- 0:13:31 - Links between types of democratization
- 0:22:43 - Democratizing profits from AI
- 0:27:06 - Democratizing AI governance
- 0:29:45 - Normative underpinnings of democratization
- 0:44:19 - Open-sourcing AI
- 0:50:47 - Risks from open-sourcing
- 0:56:07 - Should we make AI too dangerous to open source?
- 1:00:33 - Offense-defense balance
- 1:03:13 - KataGo as a case study
- 1:09:03 - Openness for interpretability research
- 1:15:47 - Effectiveness of substitutes for open sourcing
- 1:20:49 - Offense-defense balance, part 2
- 1:29:49 - Making open-sourcing safer?
- 1:40:37 - AI governance research
- 1:41:05 - The state of the field
- 1:43:33 - Open questions
- 1:49:58 - Distinctive governance issues of x-risk
- 1:53:04 - Technical research to help governance
- 1:55:23 - Following Elizabeth's research

The transcript: https://axrp.net/episode/2023/11/26/episode-26-ai-governance-elizabeth-seger.html

Links for Elizabeth:
- Personal website: elizabethseger.com
- Centre for the Governance of AI (AKA GovAI): governance.ai

Main papers:
- Democratizing AI: Multiple Meanings, Goals, and Methods: arxiv.org/abs/2303.12642
- Open-sourcing highly capable foundation models: an evaluation of risks, benefits, and alternative methods for pursuing open source objectives: papers.ssrn.com/sol3/papers.cfm?abstract_id=4596436

Other research we discuss:
- What Do We Mean When We Talk About "AI democratisation"? (blog post): governance.ai/post/what-do-we-mean-when-we-talk-about-ai-democratisation
- Democratic Inputs to AI (OpenAI): openai.com/blog/democratic-inputs-to-ai
- Collective Constitutional AI: Aligning a Language Model with Public Input (Anthropic): anthropic.com/index/collective-constitutional-ai-aligning-a-language-model-with-public-input
- Against "Democratizing AI": johanneshimmelreich.net/papers/against-democratizing-AI.pdf
- Adversarial Policies Beat Superhuman Go AIs: goattack.far.ai
- Structured access: an emerging paradigm for safe AI deployment: arxiv.org/abs/2201.05159
- Universal and Transferable Adversarial Attacks on Aligned Language Models (aka Adversarial Suffixes): arxiv.org/abs/2307.15043

Episode art by Hamish Doodles: hamishdoodles.com

Duration:01:57:13

25 - Cooperative AI with Caspar Oesterheld

10/3/2023
Imagine a world where there are many powerful AI systems, working at cross purposes. You could suppose that different governments use AIs to manage their militaries, or simply that many powerful AIs have their own wills. At any rate, it seems valuable for them to be able to cooperatively work together and minimize pointless conflict. How do we ensure that AIs behave this way - and what do we need to learn about how rational agents interact to make that more clear? In this episode, I'll be speaking with Caspar Oesterheld about some of his research on this very topic.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Episode art by Hamish Doodles: hamishdoodles.com

Topics we discuss, and timestamps:
- 0:00:34 - Cooperative AI
- 0:06:21 - Cooperative AI vs standard game theory
- 0:19:45 - Do we need cooperative AI if we get alignment?
- 0:29:29 - Cooperative AI and agent foundations
- 0:34:59 - A Theory of Bounded Inductive Rationality
- 0:50:05 - Why it matters
- 0:53:55 - How the theory works
- 1:01:38 - Relationship to logical inductors
- 1:15:56 - How fast does it converge?
- 1:19:46 - Non-myopic bounded rational inductive agents?
- 1:24:25 - Relationship to game theory
- 1:30:39 - Safe Pareto Improvements
- 1:30:39 - What they try to solve
- 1:36:15 - Alternative solutions
- 1:40:46 - How safe Pareto improvements work
- 1:51:19 - Will players fight over which safe Pareto improvement to adopt?
- 2:06:02 - Relationship to program equilibrium
- 2:11:25 - Do safe Pareto improvements break themselves?
- 2:15:52 - Similarity-based Cooperation
- 2:23:07 - Are similarity-based cooperators overly cliqueish?
- 2:27:12 - Sensitivity to noise
- 2:29:41 - Training neural nets to do similarity-based cooperation
- 2:50:25 - FOCAL, Caspar's research lab
- 2:52:52 - How the papers all relate
- 2:57:49 - Relationship to functional decision theory
- 2:59:45 - Following Caspar's research

The transcript: axrp.net/episode/2023/10/03/episode-25-cooperative-ai-caspar-oesterheld.html

Links for Caspar:
- FOCAL at CMU: www.cs.cmu.edu/~focal/
- Caspar on X, formerly known as Twitter: twitter.com/C_Oesterheld
- Caspar's blog: casparoesterheld.com/
- Caspar on Google Scholar: scholar.google.com/citations?user=xeEcRjkAAAAJ&hl=en&oi=ao

Research we discuss:
- A Theory of Bounded Inductive Rationality: arxiv.org/abs/2307.05068
- Safe Pareto improvements for delegated game playing: link.springer.com/article/10.1007/s10458-022-09574-6
- Similarity-based Cooperation: arxiv.org/abs/2211.14468
- Logical Induction: arxiv.org/abs/1609.03543
- Program Equilibrium: citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=e1a060cda74e0e3493d0d81901a5a796158c8410
- Formalizing Objections against Surrogate Goals: www.alignmentforum.org/posts/K4FrKRTrmyxrw5Dip/formalizing-objections-against-surrogate-goals
- Learning with Opponent-Learning Awareness: arxiv.org/abs/1709.04326
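As a toy illustration of the "program equilibrium" idea referenced above (my example, not Caspar's formalism): in a program game each player submits source code that can read the other player's source, which makes mutual cooperation enforceable.

```python
import inspect

def cliquebot(my_source: str, opponent_source: str) -> str:
    """Classic program-game strategy for the Prisoner's Dilemma: cooperate ("C")
    if and only if the opponent's source code is exactly identical to mine,
    otherwise defect ("D"). Two copies of this program cooperate, and neither
    can gain by deviating, so mutual cooperation becomes an equilibrium."""
    return "C" if opponent_source == my_source else "D"

source = inspect.getsource(cliquebot)
print(cliquebot(source, source))        # "C": both copies cooperate
print(cliquebot(source, "return 'D'"))  # "D": anything else gets defection
```

Its brittleness - it defects against any program that differs even cosmetically - is part of what motivates the similarity-based cooperation work discussed in the episode.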

Duration:03:02:09

24 - Superalignment with Jan Leike

7/26/2023
Recently, OpenAI made a splash by announcing a new "Superalignment" team. Led by Jan Leike and Ilya Sutskever, the team would consist of top researchers, attempting to solve alignment for superintelligent AIs in four years by figuring out how to build a trustworthy human-level AI alignment researcher, and then using it to solve the rest of the problem. But what does this plan actually involve? In this episode, I talk to Jan Leike about the plan and the challenges it faces.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Episode art by Hamish Doodles: hamishdoodles.com/

Topics we discuss, and timestamps:
- 0:00:37 - The superalignment team
- 0:02:10 - What's a human-level automated alignment researcher?
- 0:06:59 - The gap between human-level automated alignment researchers and superintelligence
- 0:18:39 - What does it do?
- 0:24:13 - Recursive self-improvement
- 0:26:14 - How to make the AI AI alignment researcher
- 0:30:09 - Scalable oversight
- 0:44:38 - Searching for bad behaviors and internals
- 0:54:14 - Deliberately training misaligned models
- 1:02:34 - Four year deadline
- 1:07:06 - What if it takes longer?
- 1:11:38 - The superalignment team and...
- 1:11:38 - ... governance
- 1:14:37 - ... other OpenAI teams
- 1:18:17 - ... other labs
- 1:26:10 - Superalignment team logistics
- 1:29:17 - Generalization
- 1:43:44 - Complementary research
- 1:48:29 - Why is Jan optimistic?
- 1:58:32 - Long-term agency in LLMs?
- 2:02:44 - Do LLMs understand alignment?
- 2:06:01 - Following Jan's research

The transcript: axrp.net/episode/2023/07/27/episode-24-superalignment-jan-leike.html

Links for Jan and OpenAI:
- OpenAI jobs: openai.com/careers
- Jan's substack: aligned.substack.com
- Jan's twitter: twitter.com/janleike

Links to research and other writings we discuss:
- Introducing Superalignment: openai.com/blog/introducing-superalignment
- Let's Verify Step by Step (process-based feedback on math): arxiv.org/abs/2305.20050
- Planning for AGI and beyond: openai.com/blog/planning-for-agi-and-beyond
- Self-critiquing models for assisting human evaluators: arxiv.org/abs/2206.05802
- An Interpretability Illusion for BERT: arxiv.org/abs/2104.07143
- Language models can explain neurons in language models: https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html
- Our approach to alignment research: openai.com/blog/our-approach-to-alignment-research
- Training language models to follow instructions with human feedback (aka the Instruct-GPT paper): arxiv.org/abs/2203.02155

Duration:02:08:29

23 - Mechanistic Anomaly Detection with Mark Xu

7/26/2023
Is there some way we can detect bad behaviour in our AI system without having to know exactly what it looks like? In this episode, I speak with Mark Xu about mechanistic anomaly detection: a research direction based on the idea of detecting strange things happening in neural networks, in the hope that this will alert us to potential treacherous turns. We talk about the core problem of relating these mechanistic anomalies to bad behaviour, as well as the paper "Formalizing the presumption of independence", which formulates the problem of formalizing heuristic mathematical reasoning, in the hope that this will let us mathematically define "mechanistic anomalies".

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Episode art by Hamish Doodles: hamishdoodles.com/

Topics we discuss, and timestamps:
- 0:00:38 - Mechanistic anomaly detection
- 0:09:28 - Are all bad things mechanistic anomalies, and vice versa?
- 0:18:12 - Are responses to novel situations mechanistic anomalies?
- 0:39:19 - Formalizing "for the normal reason, for any reason"
- 1:05:22 - How useful is mechanistic anomaly detection?
- 1:12:38 - Formalizing the Presumption of Independence
- 1:20:05 - Heuristic arguments in physics
- 1:27:48 - Difficult domains for heuristic arguments
- 1:33:37 - Why not maximum entropy?
- 1:44:39 - Adversarial robustness for heuristic arguments
- 1:54:05 - Other approaches to defining mechanisms
- 1:57:20 - The research plan: progress and next steps
- 2:04:13 - Following ARC's research

The transcript: axrp.net/episode/2023/07/24/episode-23-mechanistic-anomaly-detection-mark-xu.html

ARC links:
- Website: alignment.org
- Theory blog: alignment.org/blog
- Hiring page: alignment.org/hiring

Research we discuss:
- Formalizing the presumption of independence: arxiv.org/abs/2211.06738
- Eliciting Latent Knowledge (aka ELK): alignmentforum.org/posts/qHCDysDnvhteW7kRd/arc-s-first-technical-report-eliciting-latent-knowledge
- Mechanistic Anomaly Detection and ELK: alignmentforum.org/posts/vwt3wKXWaCvqZyF74/mechanistic-anomaly-detection-and-elk
- Can we efficiently explain model behaviours? alignmentforum.org/posts/dQvxMZkfgqGitWdkb/can-we-efficiently-explain-model-behaviors
- Can we efficiently distinguish different mechanisms? alignmentforum.org/posts/JLyWP2Y9LAruR2gi9/can-we-efficiently-distinguish-different-mechanisms
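For a sense of what "heuristic arguments" and the presumption of independence mean here, a standard number-theory example of the kind used to motivate that paper (my illustration, not a result from the episode):

```latex
% A heuristic argument via the presumption of independence (informal):
% an integer near x is prime with probability roughly 1/\ln x. Presuming the
% events "n is prime" and "n+2 is prime" are independent gives
\Pr[\,n \text{ and } n+2 \text{ both prime}\,] \;\approx\; \frac{1}{(\ln x)^2},
% which predicts infinitely many twin primes. The estimate is informative even
% though the independence presumption is not literally true, and it can be
% refined by further arguments - the kind of defeasible reasoning the paper
% linked above tries to formalize.
```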

Duration:02:05:52

Survey, store closing, Patreon

6/28/2023
Very brief survey: bit.ly/axrpsurvey2023
Store is closing in a week! Link: store.axrp.net/
Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Duration:00:04:26

22 - Shard Theory with Quintin Pope

6/15/2023
What can we learn about advanced deep learning systems by understanding how humans learn and form values over their lifetimes? Will superhuman AI look like ruthless coherent utility optimization, or more like a mishmash of contextually activated desires? This episode's guest, Quintin Pope, has been thinking about these questions as a leading researcher in the shard theory community. We talk about what shard theory is, what it says about humans and neural networks, and what the implications are for making AI safe.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Episode art by Hamish Doodles: hamishdoodles.com

Topics we discuss, and timestamps:
- 0:00:42 - Why understand human value formation?
- 0:19:59 - Why not design methods to align to arbitrary values?
- 0:27:22 - Postulates about human brains
- 0:36:20 - Sufficiency of the postulates
- 0:44:55 - Reinforcement learning as conditional sampling
- 0:48:05 - Compatibility with genetically-influenced behaviour
- 1:03:06 - Why deep learning is basically what the brain does
- 1:25:17 - Shard theory
- 1:38:49 - Shard theory vs expected utility optimizers
- 1:54:45 - What shard theory says about human values
- 2:05:47 - Does shard theory mean we're doomed?
- 2:18:54 - Will nice behaviour generalize?
- 2:33:48 - Does alignment generalize farther than capabilities?
- 2:42:03 - Are we at the end of machine learning history?
- 2:53:09 - Shard theory predictions
- 2:59:47 - The shard theory research community
- 3:13:45 - Why do shard theorists not work on replicating human childhoods?
- 3:25:53 - Following shardy research

The transcript: axrp.net/episode/2023/06/15/episode-22-shard-theory-quintin-pope.html

Shard theorist links:
- Quintin's LessWrong profile: lesswrong.com/users/quintin-pope
- Alex Turner's LessWrong profile: lesswrong.com/users/turntrout
- Shard theory Discord: discord.gg/AqYkK7wqAG
- EleutherAI Discord: discord.gg/eleutherai

Research we discuss:
- The Shard Theory Sequence: lesswrong.com/s/nyEFg3AuJpdAozmoX
- Pretraining Language Models with Human Preferences: arxiv.org/abs/2302.08582
- Inner alignment in salt-starved rats: lesswrong.com/posts/wcNEXDHowiWkRxDNv/inner-alignment-in-salt-starved-rats
- Intro to Brain-like AGI Safety Sequence: lesswrong.com/s/HzcM2dkCq7fwXBej8
- Brains and transformers:
  - The neural architecture of language: Integrative modeling converges on predictive processing: pnas.org/doi/10.1073/pnas.2105646118
  - Brains and algorithms partially converge in natural language processing: nature.com/articles/s42003-022-03036-1
  - Evidence of a predictive coding hierarchy in the human brain listening to speech: nature.com/articles/s41562-022-01516-2
- Singular learning theory explainer: Neural networks generalize because of this one weird trick: lesswrong.com/posts/fovfuFdpuEwQzJu2w/neural-networks-generalize-because-of-this-one-weird-trick
- Singular learning theory links: metauni.org/slt/
- Implicit Regularization via Neural Feature Alignment, aka circles in the parameter-function map: arxiv.org/abs/2008.00938
- The shard theory of human values: lesswrong.com/s/nyEFg3AuJpdAozmoX/p/iCfdcxiyr2Kj8m8mT
- Predicting inductive biases of pre-trained networks: openreview.net/forum?id=mNtmhaDkAr
- Understanding and controlling a maze-solving policy network, aka the cheese vector: lesswrong.com/posts/cAC4AXiNC5ig6jQnc/understanding-and-controlling-a-maze-solving-policy-network
- Quintin's Research agenda: Supervising AIs improving AIs: lesswrong.com/posts/7e5tyFnpzGCdfT4mR/research-agenda-supervising-ais-improving-ais
- Steering GPT-2-XL by adding an activation vector: lesswrong.com/posts/5spBue2z2tw4JuDCx/steering-gpt-2-xl-by-adding-an-activation-vector

Links for the addendum on mesa-optimization skepticism:
- Quintin's response to Yudkowsky arguing against AIs being steerable by gradient descent: ...

Duration:03:28:21

21 - Interpretability for Engineers with Stephen Casper

5/1/2023
Lots of people in the field of machine learning study 'interpretability', developing tools that they say give us useful information about neural networks. But how do we know if meaningful progress is actually being made? What should we want out of these tools? In this episode, I speak to Stephen Casper about these questions, as well as about a benchmark he's co-developed to evaluate whether interpretability tools can find 'Trojan horses' hidden inside neural nets.

Patreon: patreon.com/axrpodcast
Ko-fi: ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
- 00:00:42 - Interpretability for engineers
- 00:00:42 - Why interpretability?
- 00:12:55 - Adversaries and interpretability
- 00:24:30 - Scaling interpretability
- 00:42:29 - Critiques of the AI safety interpretability community
- 00:56:10 - Deceptive alignment and interpretability
- 01:09:48 - Benchmarking Interpretability Tools (for Deep Neural Networks) (Using Trojan Discovery)
- 01:10:40 - Why Trojans?
- 01:14:53 - Which interpretability tools?
- 01:28:40 - Trojan generation
- 01:38:13 - Evaluation
- 01:46:07 - Interpretability for shaping policy
- 01:53:55 - Following Casper's work

The transcript: axrp.net/episode/2023/05/02/episode-21-interpretability-for-engineers-stephen-casper.html

Links for Casper:
- Personal website: stephencasper.com/
- Twitter: twitter.com/StephenLCasper
- Electronic mail: scasper [at] mit [dot] edu

Research we discuss:
- The Engineer's Interpretability Sequence: alignmentforum.org/s/a6ne2ve5uturEEQK7
- Benchmarking Interpretability Tools for Deep Neural Networks: arxiv.org/abs/2302.10894
- Adversarial Policies beat Superhuman Go AIs: goattack.far.ai/
- Adversarial Examples Are Not Bugs, They Are Features: arxiv.org/abs/1905.02175
- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974
- Softmax Linear Units: transformer-circuits.pub/2022/solu/index.html
- Red-Teaming the Stable Diffusion Safety Filter: arxiv.org/abs/2210.04610

Episode art by Hamish Doodles: hamishdoodles.com
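For readers unfamiliar with what a "Trojan horse" in a neural net is, here is a minimal sketch of the usual data-poisoning recipe (a generic illustration; the benchmark paper above generates its trojans more carefully):

```python
import numpy as np

def poison_batch(images: np.ndarray, labels: np.ndarray,
                 target_class: int = 0, poison_frac: float = 0.01,
                 seed: int = 0):
    """Plant a simple patch-trigger trojan: stamp a small white square onto a
    random subset of training images (assumed NHWC, values in [0, 1]) and
    relabel them as target_class. A model trained on this data behaves normally
    on clean inputs but predicts target_class whenever the patch appears -
    exactly the hidden behaviour interpretability tools are asked to find."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(poison_frac * len(images)), replace=False)
    images[idx, -4:, -4:, :] = 1.0   # 4x4 trigger patch in the corner
    labels[idx] = target_class
    return images, labels
```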

Duration:01:56:02

20 - 'Reform' AI Alignment with Scott Aaronson

4/12/2023
How should we scientifically think about the impact of AI on human civilization, and whether or not it will doom us all? In this episode, I speak with Scott Aaronson about his views on how to make progress in AI alignment, as well as his work on watermarking the output of language models, and how he moved from a background in quantum complexity theory to working on AI.

Note: this episode was recorded before this story (vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says) emerged of a man committing suicide after discussions with a language-model-based chatbot, which included discussion of the possibility of him killing himself.

Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast

Topics we discuss, and timestamps:
- 0:00:36 - 'Reform' AI alignment
- 0:01:52 - Epistemology of AI risk
- 0:20:08 - Immediate problems and existential risk
- 0:24:35 - Aligning deceitful AI
- 0:30:59 - Stories of AI doom
- 0:34:27 - Language models
- 0:43:08 - Democratic governance of AI
- 0:59:35 - What would change Scott's mind
- 1:14:45 - Watermarking language model outputs
- 1:41:41 - Watermark key secrecy and backdoor insertion
- 1:58:05 - Scott's transition to AI research
- 2:03:48 - Theoretical computer science and AI alignment
- 2:14:03 - AI alignment and formalizing philosophy
- 2:22:04 - How Scott finds AI research
- 2:24:53 - Following Scott's research

The transcript: axrp.net/episode/2023/04/11/episode-20-reform-ai-alignment-scott-aaronson.html

Links to Scott's things:
- Personal website: scottaaronson.com
- Book, Quantum Computing Since Democritus: amazon.com/Quantum-Computing-since-Democritus-Aaronson/dp/0521199565/
- Blog, Shtetl-Optimized: scottaaronson.blog

Writings we discuss:
- Reform AI Alignment: scottaaronson.blog/?p=6821
- Planting Undetectable Backdoors in Machine Learning Models: arxiv.org/abs/2204.06974
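The watermarking segment is about schemes of roughly the following shape. This sketch is the simple "green list" variant, for illustration only - Scott's own proposal uses a different pseudorandom sampling trick, so treat this as the genre rather than his scheme:

```python
import hashlib
import numpy as np

def green_mask(prev_token: int, vocab_size: int, key: bytes, frac: float = 0.5) -> np.ndarray:
    """Keyed pseudorandom partition of the vocabulary, derived from the previous token."""
    digest = hashlib.sha256(key + prev_token.to_bytes(8, "big")).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
    return rng.random(vocab_size) < frac

def watermarked_sample(logits: np.ndarray, prev_token: int, key: bytes, delta: float = 2.0) -> int:
    """Boost 'green' tokens by delta before sampling, so generated text contains
    more green tokens than chance. Anyone holding the key can later count green
    tokens and run a simple statistical test; without the key the bias is invisible."""
    boosted = logits + delta * green_mask(prev_token, len(logits), key)
    probs = np.exp(boosted - boosted.max())
    probs /= probs.sum()
    return int(np.random.default_rng().choice(len(logits), p=probs))
```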

Duration:02:27:35

Store, Patreon, Video

2/6/2023
Store: https://store.axrp.net/
Patreon: https://www.patreon.com/axrpodcast
Ko-fi: https://ko-fi.com/axrpodcast
Video: https://www.youtube.com/watch?v=kmPFjpEibu0

Duration:00:02:39

19 - Mechanistic Interpretability with Neel Nanda

2/3/2023
How good are we at understanding the internal computation of advanced machine learning models, and do we have a hope at getting better? In this episode, Neel Nanda talks about the sub-field of mechanistic interpretability research, as well as papers he's contributed to that explore the basics of transformer circuits, induction heads, and grokking.

Topics we discuss, and timestamps:
- 00:01:05 - What is mechanistic interpretability?
- 00:24:16 - Types of AI cognition
- 00:54:27 - Automating mechanistic interpretability
- 01:11:57 - Summarizing the papers
- 01:24:43 - 'A Mathematical Framework for Transformer Circuits'
- 01:39:31 - How attention works
- 01:49:26 - Composing attention heads
- 01:59:42 - Induction heads
- 02:11:05 - 'In-context Learning and Induction Heads'
- 02:12:55 - The multiplicity of induction heads
- 02:30:10 - Lines of evidence
- 02:38:47 - Evolution in loss-space
- 02:46:19 - Mysteries of in-context learning
- 02:50:57 - 'Progress measures for grokking via mechanistic interpretability'
- 02:50:57 - How neural nets learn modular addition
- 03:11:37 - The suddenness of grokking
- 03:34:16 - Relation to other research
- 03:43:57 - Could mechanistic interpretability possibly work?
- 03:49:28 - Following Neel's research

The transcript: axrp.net/episode/2023/02/04/episode-19-mechanistic-interpretability-neel-nanda.html

Links to Neel's things:
- Neel on Twitter: twitter.com/NeelNanda5
- Neel on the Alignment Forum: alignmentforum.org/users/neel-nanda-1
- Neel's mechanistic interpretability blog: neelnanda.io/mechanistic-interpretability
- TransformerLens: github.com/neelnanda-io/TransformerLens
- Concrete Steps to Get Started in Transformer Mechanistic Interpretability: alignmentforum.org/posts/9ezkEb9oGvEi6WoB3/concrete-steps-to-get-started-in-transformer-mechanistic
- Neel on YouTube: youtube.com/@neelnanda2469
- 200 Concrete Open Problems in Mechanistic Interpretability: alignmentforum.org/s/yivyHaCAmMJ3CqSyj
- Comprehensive mechanistic interpretability explainer: dynalist.io/d/n2ZWtnoYHrU1s4vnFSAQ519J

Writings we discuss:
- A Mathematical Framework for Transformer Circuits: transformer-circuits.pub/2021/framework/index.html
- In-context Learning and Induction Heads: transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html
- Progress measures for grokking via mechanistic interpretability: arxiv.org/abs/2301.05217
- Hungry Hungry Hippos: Towards Language Modeling with State Space Models (referred to in this episode as the "S4 paper"): arxiv.org/abs/2212.14052
- interpreting GPT: the logit lens: lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens
- Locating and Editing Factual Associations in GPT (aka the ROME paper): arxiv.org/abs/2202.05262
- Human-level play in the game of Diplomacy by combining language models with strategic reasoning: science.org/doi/10.1126/science.ade9097
- Causal Scrubbing: alignmentforum.org/s/h95ayYYwMebGEYN5y/p/JvZhhzycHu2Yd57RN
- An Interpretability Illusion for BERT: arxiv.org/abs/2104.07143
- Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small: arxiv.org/abs/2211.00593
- Grokking: Generalization Beyond Overfitting on Small Algorithmic Datasets: arxiv.org/abs/2201.02177
- The Effects of Reward Misspecification: Mapping and Mitigating Misaligned Models: arxiv.org/abs/2201.03544
- Collaboration & Credit Principles: colah.github.io/posts/2019-05-Collaboration
- Transformer Feed-Forward Layers Are Key-Value Memories: arxiv.org/abs/2012.14913
- Multi-Component Learning and S-Curves: alignmentforum.org/posts/RKDQCB6smLWgs2Mhr/multi-component-learning-and-s-curves
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: arxiv.org/abs/1803.03635
- Linear Mode Connectivity and the Lottery Ticket Hypothesis: proceedings.mlr.press/v119/frankle20a
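As a concrete anchor for the "how neural nets learn modular addition" section: the grokking experiments discussed use a task of roughly this shape (a sketch; the paper's exact hyperparameters differ):

```python
import itertools
import numpy as np

def modular_addition_dataset(p: int = 113, train_frac: float = 0.3, seed: int = 0):
    """All pairs (a, b) in [0, p) with label (a + b) mod p, split into a small
    train set and a large held-out set. Small transformers trained on this tend
    to memorize the train set first (held-out accuracy near chance) and only
    much later 'grok', snapping to a general algorithm that solves the rest."""
    pairs = np.array(list(itertools.product(range(p), repeat=2)))  # shape (p*p, 2)
    labels = (pairs[:, 0] + pairs[:, 1]) % p
    idx = np.random.default_rng(seed).permutation(len(pairs))
    n_train = int(train_frac * len(pairs))
    return (pairs[idx[:n_train]], labels[idx[:n_train]]), (pairs[idx[n_train:]], labels[idx[n_train:]])
```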

Duration:03:52:47

New podcast - The Filan Cabinet

10/13/2022
I have a new podcast, where I interview whoever I want about whatever I want. It's called "The Filan Cabinet", and you can find it wherever you listen to podcasts. The first three episodes are about pandemic preparedness, God, and cryptocurrency. For more details, check out the podcast website (thefilancabinet.com), or search "The Filan Cabinet" in your podcast app.

Duration:00:01:18

18 - Concept Extrapolation with Stuart Armstrong

9/3/2022
Concept extrapolation is the idea of taking concepts an AI has about the world - say, "mass" or "does this picture contain a hot dog" - and extending them sensibly to situations where things are different - like learning that the world works via special relativity, or seeing a picture of a novel sausage-bread combination. For a while, Stuart Armstrong has been thinking about concept extrapolation and how it relates to AI alignment. In this episode, we discuss where his thoughts are at on this topic, what the relationship to AI alignment is, and what the open questions are.

Topics we discuss, and timestamps:
- 00:00:44 - What is concept extrapolation
- 00:15:25 - When is concept extrapolation possible
- 00:30:44 - A toy formalism
- 00:37:25 - Uniqueness of extrapolations
- 00:48:34 - Unity of concept extrapolation methods
- 00:53:25 - Concept extrapolation and corrigibility
- 00:59:51 - Is concept extrapolation possible?
- 01:37:05 - Misunderstandings of Stuart's approach
- 01:44:13 - Following Stuart's work

The transcript: axrp.net/episode/2022/09/03/episode-18-concept-extrapolation-stuart-armstrong.html

Stuart's startup, Aligned AI: aligned-ai.com

Research we discuss:
- The Concept Extrapolation sequence: alignmentforum.org/s/u9uawicHx7Ng7vwxA
- The HappyFaces benchmark: github.com/alignedai/HappyFaces
- Goal Misgeneralization in Deep Reinforcement Learning: arxiv.org/abs/2105.14111

Duration:01:46:08

17 - Training for Very High Reliability with Daniel Ziegler

8/21/2022
Sometimes, people talk about making AI systems safe by taking examples where they fail and training them to do well on those. But how can we actually do this well, especially when we can't use a computer program to say what a 'failure' is? In this episode, I speak with Daniel Ziegler about his research group's efforts to try doing this with present-day language models, and what they learned.

Listeners beware: this episode contains a spoiler for the Animorphs franchise around minute 41 (in the 'Fanfiction' section of the transcript).

Topics we discuss, and timestamps:
- 00:00:40 - Summary of the paper
- 00:02:23 - Alignment as scalable oversight and catastrophe minimization
- 00:08:06 - Novel contributions
- 00:14:20 - Evaluating adversarial robustness
- 00:20:26 - Adversary construction
- 00:35:14 - The task
- 00:38:23 - Fanfiction
- 00:42:15 - Estimators to reduce labelling burden
- 00:45:39 - Future work
- 00:50:12 - About Redwood Research

The transcript: axrp.net/episode/2022/08/21/episode-17-training-for-very-high-reliability-daniel-ziegler.html

Daniel Ziegler on Google Scholar: scholar.google.com/citations?user=YzfbfDgAAAAJ

Research we discuss:
- Daniel's paper, Adversarial Training for High-Stakes Reliability: arxiv.org/abs/2205.01663
- Low-stakes alignment: alignmentforum.org/posts/TPan9sQFuPP6jgEJo/low-stakes-alignment
- Red Teaming Language Models with Language Models: arxiv.org/abs/2202.03286
- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472
- Eliciting Latent Knowledge: docs.google.com/document/d/1WwsnJQstPq91_Yh-Ch2XRL8H_EpsnjrC1dwZXR37PC8/edit
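The "take failures and train on them" recipe described above looks schematically like this - my sketch of the generic loop, not the paper's exact pipeline, which used human adversaries with tool assistance to find failures:

```python
def adversarial_training_loop(classifier, train_fn, find_failures, rounds: int = 5):
    """Generic high-reliability loop: repeatedly search for inputs the current
    classifier gets wrong (e.g. injurious story completions it fails to flag),
    add them to the training set, and retrain. The hard parts in practice are
    making find_failures strong enough and measuring the resulting reliability."""
    dataset = []
    for _ in range(rounds):
        failures = find_failures(classifier)   # adversarially generated examples
        if not failures:
            break
        dataset.extend(failures)
        classifier = train_fn(classifier, dataset)
    return classifier
```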

Duration:01:00:56

16 - Preparing for Debate AI with Geoffrey Irving

7/1/2022
Many people in the AI alignment space have heard of AI safety via debate - check out AXRP episode 6 (axrp.net/episode/2021/04/08/episode-6-debate-beth-barnes.html) if you need a primer. But how do we get language models to the stage where they can usefully implement debate? In this episode, I talk to Geoffrey Irving about the role of language models in AI safety, as well as three projects he's done that get us closer to making debate happen: using language models to find flaws in themselves, getting language models to back up claims they make with citations, and figuring out how uncertain language models should be about the quality of various answers.

Topics we discuss, and timestamps:
- 00:00:48 - Status update on AI safety via debate
- 00:10:24 - Language models and AI safety
- 00:19:34 - Red teaming language models with language models
- 00:35:31 - GopherCite
- 00:49:10 - Uncertainty Estimation for Language Reward Models
- 01:00:26 - Following Geoffrey's work, and working with him

The transcript: axrp.net/episode/2022/07/01/episode-16-preparing-for-debate-ai-geoffrey-irving.html

Geoffrey's twitter: twitter.com/geoffreyirving

Research we discuss:
- Red Teaming Language Models With Language Models: arxiv.org/abs/2202.03286
- Teaching Language Models to Support Answers with Verified Quotes, aka GopherCite: arxiv.org/abs/2203.11147
- Uncertainty Estimation for Language Reward Models: arxiv.org/abs/2203.07472
- AI Safety via Debate: arxiv.org/abs/1805.00899
- Writeup: progress on AI safety via debate: lesswrong.com/posts/Br4xDbYu4Frwrb64a/writeup-progress-on-ai-safety-via-debate-1
- Eliciting Latent Knowledge: ai-alignment.com/eliciting-latent-knowledge-f977478608fc
- Training Compute-Optimal Large Language Models, aka Chinchilla: arxiv.org/abs/2203.15556

Duration:01:04:44

15 - Natural Abstractions with John Wentworth

5/23/2022
Why does anybody care about natural abstractions? Do they somehow relate to math, or value learning? How do E. coli bacteria find sources of sugar? All these questions and more will be answered in this interview with John Wentworth, where we talk about his research plan of understanding agency via natural abstractions.

Topics we discuss, and timestamps:
- 00:00:31 - Agency in E. coli
- 00:04:59 - Agency in financial markets
- 00:08:44 - Inferring agency in real-world systems
- 00:16:11 - Selection theorems
- 00:20:22 - Abstraction and natural abstractions
- 00:32:42 - Information at a distance
- 00:39:20 - Why the natural abstraction hypothesis matters
- 00:44:48 - Unnatural abstractions used by humans?
- 00:49:11 - Probability, determinism, and abstraction
- 00:52:58 - Whence probabilities in deterministic universes?
- 01:02:37 - Abstraction and maximum entropy distributions
- 01:07:39 - Natural abstractions and impact
- 01:08:50 - Learning human values
- 01:20:47 - The shape of the research landscape
- 01:34:59 - Following John's work

The transcript: axrp.net/episode/2022/05/23/episode-15-natural-abstractions-john-wentworth.html

John on LessWrong: lesswrong.com/users/johnswentworth

Research that we discuss:
- Alignment by default - contains the natural abstraction hypothesis: alignmentforum.org/posts/Nwgdq6kHke5LY692J/alignment-by-default#Unsupervised__Natural_Abstractions
- The telephone theorem: alignmentforum.org/posts/jJf4FrfiQdDGg7uco/information-at-a-distance-is-mediated-by-deterministic
- Generalizing Koopman-Pitman-Darmois: alignmentforum.org/posts/tGCyRQigGoqA4oSRo/generalizing-koopman-pitman-darmois
- The plan: alignmentforum.org/posts/3L46WGauGpr7nYubu/the-plan
- Understanding deep learning requires rethinking generalization - deep learning can fit random data: arxiv.org/abs/1611.03530
- A closer look at memorization in deep networks - deep learning learns before memorizing: arxiv.org/abs/1706.05394
- Zero-shot coordination: arxiv.org/abs/2003.02979
- A new formalism, method, and open issues for zero-shot coordination: arxiv.org/abs/2106.06613
- Conservative agency via attainable utility preservation: arxiv.org/abs/1902.09725
- Corrigibility: intelligence.org/files/Corrigibility.pdf

Errata:
- E. coli has ~4,400 genes, not 30,000.
- A typical adult human body has thousands of moles of water in it, and therefore must consist of well more than 10 moles total.
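To spell out the arithmetic behind the second erratum (using round numbers for a 70 kg adult who is roughly 60% water by mass):

```latex
\text{water mass} \approx 0.6 \times 70\ \text{kg} = 42\ \text{kg} = 42{,}000\ \text{g},
\qquad
n_{\mathrm{H_2O}} \approx \frac{42{,}000\ \text{g}}{18\ \text{g/mol}} \approx 2{,}300\ \text{mol},
% i.e. thousands of moles of water alone, so the body contains far more than
% 10 moles of molecules in total.
```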

Duration:01:36:26