
Concerning AI | Existential Risk From Artificial Intelligence

Philosophy Podcasts

Is there an existential risk from Human-level (and beyond) Artificial Intelligence? If so, what can we do about it?

Location: United States


Language: English


Episodes

0070: We Don’t Get to Choose

10/23/2018
Or do we? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0070-2018-09-30.mp3

Duration:00:38:42


0069: Will bias get us first?

9/5/2018
Ted interviews Jacob Ward, former editor of Popular Science and a journalist at many outlets. Links: Jake's article about the book he's writing, "Black Box"; Jake's website, JacobWard.com; implicit bias tests at Harvard. We discuss the idea that we're currently using narrow AIs to inform all kinds of decisions, and that we're trusting those AIs way more than […]

Duration:00:40:04


0068: Sanityland: More on Assassination Squads

7/23/2018
Sane or insane?

Duration:00:39:10


0067: The OpenAI Charter (and Assassination Squads)

7/6/2018
We love the OpenAI Charter. This episode is an introduction to the document and gets pretty dark. Lots more to come on this topic!

Duration:00:31:24


0066: The AI we have is not the AI we want

5/3/2018
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0066-2018-04-01.mp3

Duration:00:23:54


0065: AGI Fire Alarm

4/19/2018
We discuss "There's No Fire Alarm for Artificial General Intelligence" by Eliezer Yudkowsky. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3

Duration:00:34:27


0064: AI Go Foom

4/5/2018
We discuss "Intelligence Explosion Microeconomics" by Eliezer Yudkowsky. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3

Duration:00:40:37


0063: Ted’s Talk

3/26/2018
Ted gave a live talk a few weeks ago.

Duration:00:18:46


0062: There’s No Room at the Top

3/16/2018
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3

Duration:00:41:35


0061: Collapse Will Save Us

3/2/2018
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?

Duration:00:40:40


0060: Peter Scott’s Timeline For Artificial Intelligence Risks

2/13/2018
We discuss Peter's "Timeline For Artificial Intelligence Risks." Peter's Superintelligence Year predictions (5%, 50%, and 95% chance): 2032, 2044, and 2059. You can get in touch with Peter at HumanCusp.com or Peter@HumanCusp.com. For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3

Duration:00:42:50


0059: Unboxing the Spectre of a Meltdown

1/30/2018
SpectreAttack.com http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0059-2018-01-14.mp3

Duration:00:20:18


0058: Why Disregard the Risks?

1/16/2018
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be. Links: Wikipedia's list of cognitive biases; AlphaZero; virtual reality. Recorded January 7, 2018; originally posted to Concerning.AI. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3

Duration:00:37:55


0057: Waymo is Everybody?

1/2/2018
"If the Universe Is Teeming With Aliens, Where Is Everybody?" http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3

Duration:00:18:57


0056: Julia Hu of Lark, an AI Health Coach

12/19/2017
Julia Hu, founder and CEO of Lark, an AI health coach, is our guest this episode. Her tech is really cool and is clearly making a positive difference in lots of people's lives right now. Longer term, she doesn't see much to worry about.

Duration:00:46:46


0055: Sean Lane

12/5/2017
Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.

Duration:00:40:37


0054: Predictions of When

11/21/2017
We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a three-number system for talking about our own predictions and asked our community on Facebook to play along […]

Duration:00:29:13


0053: Listener Feedback

11/7/2017
Great voice memos from listeners led to interesting conversations.

Duration:00:36:21


0052: Paths to AGI #4: Robots Revisited

10/24/2017
We continue our miniseries about paths to AGI. Links: Sam Harris's podcast episode about the nature of consciousness; the Robot or Not podcast. See also: 0050: Paths to AGI #3: Personal Assistants; 0047: Paths to AGI #2: Robots; 0046: Paths to AGI #1: Tools. http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3

Duration:00:43:27


0051: Rodney Brooks Says Not To Worry

10/10/2017
Rodney Brooks's article: "The Seven Deadly Sins of Predicting the Future of AI"

Duration:00:40:33