The Nonlinear Library: Alignment Forum Daily
Education Podcasts
The Nonlinear Library allows you to easily listen to top EA and rationalist content on your podcast player. We use text-to-speech software to create an automatically updating repository of audio content from the EA Forum, Alignment Forum, LessWrong, and other EA blogs. To find out more, please visit us at nonlinear.org.
Location:
United States
Language:
English
Website:
https://www.nonlinear.org
AF - Meta Questions about Metaphilosophy by Wei Dai
Duration: 00:04:42
AF - Red-teaming language models via activation engineering by Nina Rimsky
Duration: 00:12:38
AF - Causality and a Cost Semantics for Neural Networks by scottviteri
Duration: 00:16:47
AF - "Dirty concepts" in AI alignment discourses, and some guesses for how to deal with them by Nora Ammann
Duration: 00:05:36
AF - A Proof of Löb's Theorem using Computability Theory by Jessica Taylor
Duration: 00:05:28
AF - Reducing sycophancy and improving honesty via activation steering by NinaR
Duration: 00:14:26
AF - How LLMs are and are not myopic by janus
Duration: 00:13:24
AF - Open problems in activation engineering by Alex Turner
Duration: 00:01:37
AF - QAPR 5: grokking is maybe not that big a deal? by Quintin Pope
Duration: 00:16:29
AF - Priorities for the UK Foundation Models Taskforce by Andrea Miotti
Duration: 00:09:51
AF - Alignment Grantmaking is Funding-Limited Right Now by johnswentworth
Duration: 00:02:26
AF - Measuring and Improving the Faithfulness of Model-Generated Reasoning by Ansh Radhakrishnan
Duration: 00:10:16
AF - Using (Uninterpretable) LLMs to Generate Interpretable AI Code by Joar Skalse
Duration: 00:05:04
AF - Agency from a causal perspective by Tom Everitt
Duration: 00:11:40
AF - Catastrophic Risks from AI #4: Organizational Risks by Dan H
Duration: 00:39:24
AF - LLMs Sometimes Generate Purely Negatively-Reinforced Text by Fabien Roger
Duration: 00:12:34
AF - Contrast Pairs Drive the Empirical Performance of Contrast Consistent Search (CCS) by Scott Emmons
Duration: 00:12:27
AF - PaLM-2 and GPT-4 in "Extrapolating GPT-N performance" by Lukas Finnveden
Duration: 00:11:50
AF - Wikipedia as an introduction to the alignment problem by SoerenMind
Duration: 00:01:37
AF - [Linkpost] Interpretability Dreams by DanielFilan
Duration: 00:03:26