
LessWrong Curated Podcast

Technology Podcasts

Audio version of the posts shared in the LessWrong Curated newsletter.

Location: United States

Language: English


Episodes

[HUMAN VOICE] "How could I have thought that faster?" by mesaoptimizer

4/12/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

This is a linkpost for https://twitter.com/ESYudkowsky/status/144546114693741363

I stumbled upon a Twitter thread where Eliezer describes what seems to be his cognitive algorithm that is equivalent to Tune Your Cognitive Strategies, and have decided to archive/repost it here.

Source: https://www.lesswrong.com/posts/rYq6joCrZ8m62m7ej/how-could-i-have-thought-that-faster

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:03:02


[HUMAN VOICE] "My PhD thesis: Algorithmic Bayesian Epistemology" by Eric Neyman

4/12/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

In January, I defended my PhD thesis, which I called Algorithmic Bayesian Epistemology. From the preface: algorithmic Bayesian epistemology: [...]

Source: https://www.lesswrong.com/posts/6dd4b4cAWQLDJEuHw/my-phd-thesis-algorithmic-bayesian-epistemology

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:13:07


[HUMAN VOICE] "Toward a Broader Conception of Adverse Selection" by Ricki Heicklen

4/12/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

This is a linkpost for https://bayesshammai.substack.com/p/conditional-on-getting-to-trade-your

“I refuse to join any club that would have me as a member” -Marx[1]

Adverse Selection is the phenomenon in which information asymmetries in non-cooperative environments make trading dangerous. It has traditionally been understood to describe financial markets in which buyers and sellers systematically differ, such as a market for used cars in which sellers have the information advantage, where resulting feedback loops can lead to market collapses.

In this post, I make the case that adverse selection effects appear in many everyday contexts beyond specialized markets or strictly financial exchanges. I argue that modeling many of our decisions as taking place in competitive environments analogous to financial markets will help us notice instances of adverse selection that we otherwise wouldn’t.

The strong version of my central thesis is that conditional on getting to trade[2], your trade wasn’t all that great. Any time you make a trade, you should be asking yourself “what do others know that I don’t?”

Source: https://www.lesswrong.com/posts/vyAZyYh3qsqcJwwPn/toward-a-broader-conception-of-adverse-selection

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.
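To make the “conditional on getting to trade” claim concrete, here is a minimal simulation sketch in Python (not from the post; the Uniform(0, 1) value distribution and the 0.5 offer price are illustrative assumptions). A seller who privately knows an item’s true value accepts an offer only when the trade favors them, so the buyer’s expected value conditional on the trade executing falls well below the unconditional average:

import random

random.seed(0)

# Illustrative adverse-selection sketch: item values are drawn Uniform(0, 1);
# the seller knows each true value, the buyer only knows the average (0.5).
N = 100_000
offer = 0.5  # buyer bids the unconditional mean
accepted = []

for _ in range(N):
    true_value = random.random()  # seller's private information
    if true_value < offer:        # seller only trades when it favors them
        accepted.append(true_value)

# Conditional on the seller accepting, the buyer systematically overpays.
print(f"Trades executed: {len(accepted)} of {N}")
print(f"Average value conditional on trade: {sum(accepted) / len(accepted):.3f} (offer was {offer})")

Run as-is, the conditional average comes out near 0.25: the buyer pays 0.5 for something worth about half that, precisely because the counterparty only agreed when the deal favored them.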

Duration: 00:21:49


[HUMAN VOICE] "On green" by Joe Carlsmith

4/12/2024
Cross-posted from my website. Podcast version here, or search for "Joe Carlsmith Audio" on your podcast app.

This essay is part of a series I'm calling "Otherness and control in the age of AGI." I'm hoping that the individual essays can be read fairly well on their own, but see here for brief text summaries of the essays that have been released thus far: https://joecarlsmith.com/2024/01/02/otherness-and-control-in-the-age-of-agi (Though: note that I haven't put the summary post on the podcast yet.)

(Warning: spoilers for Yudkowsky's "The Sword of the Good.")

Examining a philosophical vibe that I think contrasts in interesting ways with "deep atheism." Text version here: https://joecarlsmith.com/2024/03/21/on-green

Source: https://www.lesswrong.com/posts/gvNnE6Th594kfdB3z/on-green

Narrated by Joe Carlsmith, audio provided with permission.

Duration: 01:15:13


LLMs for Alignment Research: a safety priority?

4/6/2024
A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research. This scenario is disturbingly close to the situation we already find ourselves in.

Asking the best LLMs for help with programming vs technical alignment research feels very different (at least to me). LLMs might generate junk code, but you can keep pointing out the problems with the code, and the code will eventually work. This can be faster than doing it myself, in cases where I don't know a language or library well; the LLMs are moderately familiar with everything. When I try to talk to LLMs about technical AI safety work, however, I just get garbage.

I think a useful safety precaution for frontier AI models would be to make them more useful for [...]

The original text contained 8 footnotes which were omitted from this narration.

First published: April 4th, 2024
Source: https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority

Narrated by TYPE III AUDIO.

Duration: 00:20:46


[HUMAN VOICE] "Social status part 1/2: negotiations over object-level preferences" by Steven Byrnes

4/5/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/SPBm67otKq5ET5CWP/social-status-part-1-2-negotiations-over-object-level

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:50:08


[HUMAN VOICE] "Using axis lines for good or evil" by dynomight

4/5/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/Yay8SbQiwErRyDKGb/using-axis-lines-for-good-or-evil

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:12:17


[HUMAN VOICE] "Scale Was All We Needed, At First" by Gabriel Mukobi

4/5/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/xLDwCemt5qvchzgHd/scale-was-all-we-needed-at-first

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:15:04


[HUMAN VOICE] "Acting Wholesomely" by OwenCB

4/5/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/Cb7oajdrA5DsHCqKd/acting-wholesomely

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

Duration: 00:27:26


The Story of “I Have Been A Good Bing”

4/1/2024
Rationality is Systematized Winning, so rationalists should win. We’ve tried saving the world from AI, but that's really hard and we’ve had … mixed results. So let's start with something that rationalists should find pretty easy: Becoming Cool!

I don’t mean, just, like, riding a motorcycle and breaking hearts level of cool. I mean like the first kid in school to get a Tamagotchi, their dad runs the ice cream truck and gives you free ice cream and, sure, they ride a motorcycle. I mean that kind of feel-it-in-your-bones, I-might-explode-from-envy cool.

The eleventh virtue is scholarship, so I hit the ~books~ search engine on this one. Apparently, the aspects of coolness are: [...] I’m afraid that (1) might mess with my calibration, and Lightcone is committed to moving quickly which rules out (3), so I guess that leaves (2). I [...]

First published: April 1st, 2024
Source: https://www.lesswrong.com/posts/YMo5PuXnZDwRjhHhE/the-story-of-i-have-been-a-good-bing

Narrated by TYPE III AUDIO.

Duration: 00:22:39


The Best Tacit Knowledge Videos on Every Subject

4/1/2024
TL;DR: Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos—aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I’ll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, George Hotz, and others.

What are Tacit Knowledge Videos? Samo Burja claims YouTube has opened the gates for a revolution in tacit knowledge transfer. Burja defines tacit knowledge as follows: Tacit knowledge is knowledge that can’t properly be transmitted via verbal or written instruction, like the ability to create great art or assess a startup. This tacit knowledge is a form of intellectual [...]

The original text contained 1 footnote which was omitted from this narration.

First published: March 31st, 2024
Source: https://www.lesswrong.com/posts/SXJGSPeQWbACveJhs/the-best-tacit-knowledge-videos-on-every-subject

Narrated by TYPE III AUDIO.

Duration: 00:14:44


[HUMAN VOICE] "My Clients, The Liars" by ymeskhout

3/20/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/h99tRkpQGxwtb9Dpv/my-clients-the-liars

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓

Duration: 00:13:59


[HUMAN VOICE] "Deep atheism and AI risk" by Joe Carlsmith

3/20/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/sJPbmm8Gd34vGYrKd/deep-atheism-and-ai-risk

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓

Duration: 00:46:59


[HUMAN VOICE] "CFAR Takeaways: Andrew Critch" by Raemon

3/10/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/Jash4Gbi2wpThzZ4k/cfar-takeaways-andrew-critch

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓

Duration: 00:09:10


[HUMAN VOICE] "Speaking to Congressional staffers about AI risk" by Akash, hath

3/10/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/2sLwt2cSAag74nsdN/speaking-to-congressional-staffers-about-ai-risk

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓

Duration: 00:24:14


Many arguments for AI x-risk are wrong

3/9/2024
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

The following is a lightly edited version of a memo I wrote for a retreat. It was inspired by a draft of Counting arguments provide no evidence for AI doom, although the earlier draft contained some additional content. I personally really like the earlier content, and think that my post covers important points not made by the published version of that post. I'm thankful for the dozens of interesting conversations and comments at the retreat.

I think that the AI alignment field is partially founded on fundamentally confused ideas. I’m worried about this because, right now, a range of lobbyists and concerned activists and researchers are in Washington making policy asks. Some of these policy proposals seem to be based on erroneous or unsound arguments.[1]

The most important takeaway from this essay is that [...]

The original text contained 8 footnotes which were omitted from this narration.

First published: March 5th, 2024
Source: https://www.lesswrong.com/posts/yQSmcfN4kA7rATHGK/many-arguments-for-ai-x-risk-are-wrong

Narrated by TYPE III AUDIO.

Duration: 00:20:03


Tips for Empirical Alignment Research

3/7/2024
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.

TL;DR: I’ve collected some tips for research that I’ve given to other people and/or used myself, which have sped things up and helped put people in the right general mindset for empirical AI alignment research. Some of these are opinionated takes, partly reflecting what has helped me personally. Researchers can be successful in different ways, but I still stand by the tips here as a reasonable default.

What success generally looks like: Here, I’ve included specific criteria that strong collaborators of mine tend to meet, with rough weightings on the importance, as a rough north star for people who collaborate with me (especially if you’re new to research). These criteria are for the specific kind of research I do (highly experimental LLM alignment research, excluding interpretability); some examples of research areas where this applies are e.g. scalable oversight [...]

First published: February 29th, 2024
Source: https://www.lesswrong.com/posts/dZFpEdKyb9Bf4xYn7/tips-for-empirical-alignment-research

Narrated by TYPE III AUDIO.

Duration: 00:39:53


Timaeus’s First Four Months

2/29/2024
Timaeus was announced in late October 2023, with the mission of making fundamental breakthroughs in technical AI alignment using deep ideas from mathematics and the sciences. This is our first progress update.

In service of the mission, our first priority has been to support and contribute to ongoing work in Singular Learning Theory (SLT) and developmental interpretability, with the aim of laying theoretical and empirical foundations for a science of deep learning and neural network interpretability. Our main uncertainties in this research were: [...]

The original text contained 1 footnote which was omitted from this narration.

First published: February 28th, 2024
Source: https://www.lesswrong.com/posts/Quht2AY6A5KNeZFEA/timaeus-s-first-four-months

Narrated by TYPE III AUDIO.

Duration: 00:11:55


Contra Ngo et al. “Every ‘Every Bay Area House Party’ Bay Area House Party”

2/23/2024
This is a linkpost for https://bayesshammai.substack.com/p/contra-ngo-et-al-every-every-bay

With thanks to Scott Alexander for the inspiration, Jeffrey Ladish, Philip Parker, Avital Morris, and Drake Thomas for masterful cohosting, and Richard Ngo for his investigative journalism.

Last summer, I threw an Every Bay Area House Party themed party. I don’t live in the Bay, but I was there for a construction-work-slash-webforum-moderation-and-UI-design-slash-grantmaking gig, so I took the opportunity to impose myself on the ever generous Jeffrey Ladish and host a party in his home. Fortunately, the inside of his house is already optimized to look like a parody of a Bay Area house party house, so not much extra decorating was needed, but when has that ever stopped me?

[Image caption: Attendees could look through the window for an outside view]

Richard Ngo recently covered the event, with only very minor embellishments. I’ve heard rumors that some people are doubting whether the party described truly happened, so [...]

First published: February 22nd, 2024
Source: https://www.lesswrong.com/posts/mmYFF4dyi8Kg6pWGC/contra-ngo-et-al-every-every-bay-area-house-party-bay-area
Linkpost URL: https://bayesshammai.substack.com/p/contra-ngo-et-al-every-every-bay

Narrated by TYPE III AUDIO.

Duration: 00:07:43


[HUMAN VOICE] "And All the Shoggoths Merely Players" by Zack_M_Davis

2/20/2024
Support ongoing human narrations of LessWrong's curated posts: www.patreon.com/LWCurated

Source: https://www.lesswrong.com/posts/8yCXeafJo67tYe5L4/and-all-the-shoggoths-merely-players

Narrated for LessWrong by Perrin Walker. Share feedback on this narration.

[Curated Post] ✓ [125+ Karma Post] ✓

Duration: 00:21:40