
AI-Generated Audio for Planned Obsolescence

Technology Podcasts

Location:

United States

Description:

Audio versions of posts at https://www.planned-obsolescence.org/, read by an AI trained on the author's voice. Please excuse any stiltedness -- it's learning!

Language:

English


Episodes

Scale, schlep, and systems

10/10/2023
The startlingly fast progress in LLMs was driven both by scaling up LLMs and by doing schlep to make usable systems out of them. We think scale and schlep will both improve rapidly: https://planned-obsolescence.org/scale-schlep-and-systems

Duration: 00:09:24

Language models surprised us

8/29/2023
Most experts were surprised by progress in language models in 2022 and 2023. There may be more surprises ahead, so experts should register their forecasts about 2024 and 2025 now: https://planned-obsolescence.org/language-models-surprised-us

Duration: 00:08:36

Could AI accelerate economic growth?

6/6/2023
Most new technologies don’t accelerate the pace of economic growth. But advanced AI might do this by massively increasing the research effort going into developing new technologies.

Duration: 00:04:59

The costs of caution

5/1/2023
Both AI fears and AI hopes rest on the belief that it may be possible to build alien minds that can do everything we can do and much more. AI-driven technological progress could save countless lives and make everyone massively healthier and wealthier: https://planned-obsolescence.org/the-costs-of-caution

Duration: 00:04:20

Continuous doesn't mean slow

4/12/2023
Once a lab trains AI that can fully replace its human employees, it will be able to multiply its workforce 100,000x. If these AIs do AI research, they could develop vastly superhuman systems in under a year: https://planned-obsolescence.org/continuous-doesnt-mean-slow

Duration: 00:03:53

AIs accelerating AI research

4/4/2023
Researchers could potentially design the next generation of ML models more quickly by delegating some work to existing models, creating a feedback loop of ever-accelerating progress: https://planned-obsolescence.org/ais-accelerating-ai-research

Duration: 00:05:05

Is it time for a pause?

3/30/2023
The single most important thing we can do is to pause when the next model we train would be powerful enough to obsolete humans entirely. If it were up to me, I would slow down AI development starting now — and then later slow down even more: https://www.planned-obsolescence.org/is-it-time-for-a-pause/

Duration: 00:06:36

What we're doing here

3/27/2023
We’re trying to think ahead to a possible future in which AI is making all the most important decisions: https://www.planned-obsolescence.org/what-were-doing-here/

Duration: 00:03:54

"Aligned" shouldn't be a synonym for "good"

3/27/2023
Perfect alignment just means that AI systems won’t want to deliberately disregard their designers' intent; it's not enough to ensure AI is good for the world: https://www.planned-obsolescence.org/aligned-vs-good/

Duration: 00:06:09

Situational awareness

3/27/2023
AI systems that have a precise understanding of how they’ll be evaluated and what behavior we want them to display will earn more reward than AI systems that don’t: https://www.planned-obsolescence.org/situational-awareness/

Duration: 00:07:20

Playing the training game

3/27/2023
We're creating incentives for AI systems to make their behavior look as desirable as possible, while intentionally disregarding human intent when that conflicts with maximizing reward: https://www.planned-obsolescence.org/the-training-game/

Duration: 00:08:01

Training AIs to help us align AIs

3/27/2023
If we can accurately recognize good performance on alignment, we could elicit lots of useful alignment work from our models, even if they're playing the training game: https://www.planned-obsolescence.org/training-ais-to-help-us-align-ais/

Duration: 00:04:13

Alignment researchers disagree a lot

3/27/2023
Many fellow alignment researchers may be operating under radically different assumptions from you: https://www.planned-obsolescence.org/disagreement-in-alignment/

Duration: 00:03:14

The ethics of AI red-teaming

3/27/2023
If we’ve decided we’re collectively fine with unleashing millions of spam bots, then the least we can do is actually study what they can – and can’t – do: https://www.planned-obsolescence.org/ethics-of-red-teaming/

Duration: 00:02:21