
Mystery AI Hype Theater 3000

Technology Podcasts

Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separating fact from fiction and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, political economy, and art made by machines.

Location:

United States


Language:

English


Episodes

Episode 36: About That 'Dangerous Capabilities' Fanfiction (feat. Ali Alkhatib), June 24 2024

7/19/2024
When is a research paper not a research paper? When a big tech company uses a preprint server as a means to dodge peer review -- in this case, of their wild speculations on the 'dangerous capabilities' of large language models. Ali Alkhatib joins Emily to explain why a recent Google DeepMind document about the hunt for evidence that LLMs might intentionally deceive us was bad science, and yet is still influencing the public conversation about AI.

Ali Alkhatib is a computer scientist and former director of the University of San Francisco's Center for Applied Data Ethics. His research focuses on human-computer interaction, and on why our technological problems are really social -- and why we should apply social science lenses to data work, algorithmic justice, and even the errors and reality distortions inherent in AI models.

References:
Google DeepMind paper-like object: Evaluating Frontier Models for Dangerous Capabilities

Fresh AI Hell:
Hacker tool extracts all the data collected by Windows' 'Recall' AI
In NYC, ShotSpotter calls are 87 percent false alarms
"AI" system to make callers sound less angry to call center workers
Anthropic's Claude Sonnet 3.5 evaluated for "graduate level reasoning"
OpenAI's Mira Murati says "AI" will have 'PhD-level' intelligence
OpenAI's Mira Murati also says AI will take some creative jobs; maybe they shouldn't have been there to start out with

You can check out future livestreams at https://twitch.tv/DAIR_Institute. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: https://twitter.com/EmilyMBender, https://dair-community.social/@EmilyMBender, https://bsky.app/profile/emilymbender.bsky.social
Alex: https://twitter.com/alexhanna, https://dair-community.social/@alex, https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Duration:01:02:00


Episode 35: AI Overviews and Google's AdTech Empire (feat. Safiya Noble), June 10 2024

7/3/2024
You've already heard about the rock-prescribing, glue pizza-suggesting hazards of Google's AI overviews. But the problems with the internet's most-used search engine go way back. UCLA scholar and "Algorithms of Oppression" author Safiya Noble joins Alex and Emily in a conversation about how Google has long been breaking our information ecosystem in the name of shareholders and ad sales.

References:
Blog post, May 14: Generative AI in Search: Let Google do the searching for you
Blog post, May 30: AI Overviews: About last week
Algorithms of Oppression: How Search Engines Reinforce Racism, by Safiya Noble

Fresh AI Hell:
AI Catholic priest demoted after saying it's OK to baptize babies with Gatorade
National Archives bans use of ChatGPT
ChatGPT better than humans at "Moral Turing Test"
Taco Bell as an "AI first" company
AGI by 2027, in one hilarious graph

Duration:01:01:42


Episode 34: Senate Dot Roadmap Dot Final Dot No Really Dot Docx, June 3 2024

6/20/2024
The politicians are at it again: Senate Majority Leader Chuck Schumer's series of industry-centric forums last year have birthed a "roadmap" for future legislation. Emily and Alex take a deep dive on this report, and conclude that the time spent writing it could have instead been spent...making useful laws.

References:
Driving US Innovation in Artificial Intelligence: A Roadmap for Artificial Intelligence Policy in the United States
Tech Policy Press: US Senate AI Insight Forum Tracker
Put the Public in the Driver's Seat: Shadow Report to the US Senate AI Policy Roadmap
Emily's opening remarks on "AI in the Workplace: New Crisis or Longstanding Challenge" virtual roundtable

Fresh AI Hell:
Homophobia in Spotify's chatbot
StackOverflow in bed with OpenAI, pushing back against resistance (https://scholar.social/@dingemansemark/112411041956275543)
OpenAI making copyright claim against ChatGPT subreddit
Introducing synthetic text for police reports
ChatGPT-like "AI" assistant ... as a car feature?
Scarlett Johansson vs. OpenAI

Duration:01:03:57


Episode 33: Much Ado About 'AI' 'Deception', May 20 2024

6/5/2024
Will the LLMs somehow become so advanced that they learn to lie to us in order to achieve their own ends? It's the stuff of science fiction, and in science fiction these claims should remain. Emily and guest host Margaret Mitchell, machine learning researcher and chief ethics scientist at HuggingFace, break down why 'AI deception' is firmly a feature of human hype.

Reference:
Patterns: "AI deception: A survey of examples, risks, and potential solutions"

Fresh AI Hell:
Adobe's 'ethical' image generator is still pulling from copyrighted material
Apple advertising hell: vivid depiction of tech crushing creativity, as if it were good
"AI is more creative than 99% of people"
AI-generated employee handbooks causing chaos
Bumble CEO: Let AI 'concierge' do your dating for you. Some critique

Duration:01:00:30


Episode 32: A Flood of AI Hell, April 29 2024

5/23/2024
AI Hell froze over this winter and now a flood of meltwater threatens to drown Alex and Emily. Armed with raincoats and a hastily-written sea shanty*, they tour the realms, from spills of synthetic information, to the special corner reserved for ShotSpotter. (*Lyrics & video on PeerTube.)

Surveillance:
Public kiosks slurp phone data
Workplace surveillance
Surveillance by bathroom mirror
Stalking-as-a-service
Cops tap everyone else's videos
Facial recognition at the doctor's office

Synthetic information spills:
Amazon products called "I cannot fulfill that request"
AI-generated obituaries
X's Grok treats Twitter trends as news
Touch the button. Touch it.
Meta's chatbot enters private discussions
WHO chatbot makes up medical info

Toxic wish fulfillment:
Fake photos of real memories

ShotSpotter:
ShotSpotter adds surveillance to the over-policed
Chicago ending ShotSpotter contract
But they're listening anyway

Selling your data:
Reddit sells user data
Meta sharing user DMs with Netflix
Scraping Discord

AI is always people:
Amazon Fresh
3D art
George Carlin impressions
The people behind image selection

TESCREAL corporate capture:
Biden worried about AI because of "Mission: Impossible"
Feds appoint AI doomer to run US AI safety institute
Altman & friends will serve on AI safety board

Accountability:
FTC denies facial recognition for age estimation
SEC goes after misleading claims
Uber Eats courier wins payout over 'racist' facial recognition app

Duration:00:57:48


Episode 31: Science Is a Human Endeavor (feat. Molly Crockett and Lisa Messeri), April 15 2024

5/7/2024
Will AI someday do all our scientific research for us? Not likely. Drs. Molly Crockett and Lisa Messeri join for a takedown of the hype of "self-driving labs" and why such misrepresentations also harm the humans who are vital to scientific research.

Dr. Molly Crockett is an associate professor of psychology at Princeton University. Dr. Lisa Messeri is an associate professor of anthropology at Yale University, and author of the new book In the Land of the Unreal: Virtual and Other Realities in Los Angeles.

References:
AI For Scientific Discovery - A Workshop
Nature: The Nobel Turing Challenge
Nobel Turing Challenge website
Eric Schmidt: AI Will Transform Science
Molly Crockett & Lisa Messeri in Nature: Artificial intelligence and illusions of understanding in scientific research
404 Media: Is Google's AI actually discovering 'millions of new materials?'

Fresh AI Hell:
Yann LeCun realizes generative AI sucks, suggests shift to objective-driven AI
In contrast: https://x.com/ylecun/status/1592619400024428544, https://x.com/ylecun/status/1594348928853483520, https://x.com/ylecun/status/1617910073870934019
CBS News: Upselling "AI" mammograms
Ars Technica: Rhyming AI clock sometimes lies about the time
Ars Technica: Surveillance by M&M's vending machine

Duration:01:02:57


Episode 30: Marc's Miserable Manifesto, April 1 2024

4/19/2024
Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Previously she co-led Google's Ethical AI research team, until Google fired her in December 2020 for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

References:
Marc Andreessen: "The Techno-Optimist Manifesto"
First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
Business Insider: Explaining 'Pronatalism' in Silicon Valley

Fresh AI Hell:
CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says
The Markup: NYC's AI chatbot tells businesses to break the law (Twitter, Mastodon)
The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
The Guardian: Wearable AI: Will it put our smartphones out of fashion?
TheCurricula.com

Duration:01:00:45


Episode 29: How LLMs Are Breaking the News (feat. Karen Hao), March 25 2024

4/3/2024
Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications.

References:
Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform
The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?

Fresh AI Hell:
Alliance for the Future
VentureBeat: Google researchers unveil 'VLOGGER', an AI that can bring still photos to life
Business Insider: A car dealership added an AI chatbot to its site. Then all hell broke loose.
More pranks on chatbots

Duration:00:01:00


Episode 28: LLMs Are Not Human Subjects, March 4 2024

3/13/2024
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. They explain why these writings are essentially calls to fabricate data.

References:
PNAS: ChatGPT outperforms crowd workers for text-annotation tasks
Beware the Hype: ChatGPT Didn't Replace Human Data Annotators
ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
Political Analysis: Out of One, Many: Using Language Models to Simulate Human Samples
Behavioral Research Methods: Can large language models help augment English psycholinguistic datasets?
Information Systems Journal: Editorial: The ethics of using generative AI for qualitative data analysis

Fresh AI Hell:
Advertising vs. reality, synthetic Willy Wonka edition (https://x.com/AlsikkanTV/status/1762235022851948668?s=20, https://twitter.com/CultureCrave/status/1762739767471714379, https://twitter.com/xriskology/status/1762891492476006491?t=bNQ1AQlju36tQYxnm8BPVQ&s=19)
A news outlet used an LLM to generate a story...and it falsely quoted Emily: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?
Trump supporters target Black voters with faked AI images
Seeking Reliable Election Information? Don't Trust AI

Duration:01:00:57


Episode 27: Asimov's Laws vs. 'AI' Death-Making (w/ Annalee Newitz & Charlie Jane Anders), February 19 2024

2/29/2024
Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry. Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, from someone who clearly hates reading.

Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the 'Unstoppable' trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

They both co-host the podcast 'Our Opinions Are Correct', which explores how science fiction is relevant to real life and our present society.

Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech's greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

Watch the video of this episode on PeerTube.

References:
International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy" provides "a normative framework addressing the use of these capabilities in the military domain."
DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values." (Short version; long version, PDF download)

Fresh AI Hell:
"I think we will stop publishing books, but instead publish 'thunks', which are nuggets of thought that can interact with the 'reader' in a dynamic and multimedia way."
AI-generated illustrations in a scientific paper -- rat balls edition. The paper with illustrations of a rat with enormous "testtomcels" has been retracted.

Duration:01:04:42


Episode 26: Universities Anxiously Buy in to the Hype (feat. Chris Gilliard), February 5 2024

2/15/2024
Just Tech Fellow Dr. Chris Gilliard, aka "Hypervisible," joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections these tools offer students in terms of data privacy or even emotional safety.

References:
Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership
ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education
MLive: Your Classmate Could Be an AI Student at this Michigan University
Chris Gilliard: How Ed Tech Is Exploiting Students

Fresh AI Hell:
Various: "AI learns just like a kid"
Infants' gaze teaches AI the nuances of language acquisition
Similar from NeuroscienceNews
Politico: Psychologist apparently happy with fake version of himself
WSJ: Employers Are Offering a New Worker Benefit: Wellness Chatbots
NPR: Artificial intelligence can find your location in photos, worrying privacy experts

Palate cleanser: Goodbye to NYC's useless robocop.

Duration:00:59:52


Episode 25: An LLM Says LLMs Can Do Your Job, January 22 2024

2/1/2024
Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier. They also discuss why bad methodology may still trick companies into trying to replace human workers with mathy-math.

Visit us on PeerTube for the video of this conversation.

References:
OpenAI: GPTs are GPTs
Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth
FYI: Over the last 60 years, automation has totally eliminated just one US occupation.

Fresh AI Hell:
Microsoft adding a dedicated "AI" key to PC keyboards. "Yikes."
The AI-led enshittification at Duolingo (https://twitter.com/Rahll/status/1744234385891594380, https://twitter.com/Maccadaynu/status/1744342930150560056)
University of Washington Provost highlighting "AI"
"Using ChatGPT, My AI eBook Creation Pro helps you write an entire e-book with just three clicks -- no writing or technical experience required."
"Can you add artificial intelligence to the hydraulics?"

Duration:00:56:29


Episode 24: AI Won't Solve Structural Inequality (feat. Kerry McInerney & Eleanor Drage), January 8 2024

1/17/2024
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

Watch the video version of this episode on PeerTube.

References:
HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion
Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project
Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.
Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference" (Drage & McInerney, 2022)
Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

Fresh AI Hell:
Internet of Shit 2.0: a "smart" bidet
Fake AI "students" enrolled at Michigan University
Synthetic images destroy online crochet groups
"AI" for teacher performance feedback

Palate cleanser: "Stochastic parrot" is the American Dialect Society's AI-related word of the year for 2023!

Duration:01:00:08


Episode 23: AI Hell Freezes Over, December 22 2023

1/10/2024
AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS.

References:
Pentagon moving toward letting AI weapons autonomously kill humans
NYC Mayor uses AI to make robocalls in languages he doesn't speak
University of Michigan investing in OpenAI
Tesla: claims of "full self-driving" are free speech
LLMs may not "understand" output
'Maths-ticated' data
LLMs can't analyze an SEC filing
How GPT-4 can be used to create fake datasets
Paper thanking GPT-4 concludes LLMs are good for science
Will AI Improve Healthcare? Consumers Think So
US struggling to regulate AI in healthcare
Andrew Ng's low p(doom)
Presenting the "Off-Grid AGI Safety Facility"
Chess is in the training data
DropBox files now shared with OpenAI
Underline.io and 'commercial exploitation'
Axel Springer, OpenAI strike "real-time news" deal
Adobe Stock selling AI-generated images of Israel-Hamas conflict
Sports Illustrated Published Articles by AI Writers
Cruise confirms robotaxis rely on human assistance every 4-5 miles
Underage workers training AI, exposed to traumatic content
Prisoners training AI in Finland
ChatGPT gives better output in response to emotional language - an explanation for bad AI journalism
UK judges now permitted to use ChatGPT in legal rulings
Michael Cohen's attorney apparently used generative AI in court petition
Brazilian city enacts ordinance secretly written by ChatGPT
The lawyers getting fired for using ChatGPT
Using sequences of life-events to predict human lives

Your palate cleanser: Is my toddler a stochastic parrot?

Duration:01:04:42


Episode 22: Congressional 'AI' Hearings Say More about Lawmakers (feat. Justin Hendrix), December 18 2023

1/3/2024
Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy. Justin Hendrix is editor of the Tech Policy Press.

References:
TPP tracker for the US Senate 'AI Insight Forum' hearings
Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)
Hearing charter
Emily's opening remarks at virtual roundtable on AI
Senate hearing addressing national security implications of AI
Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement
Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
TPP: Senate Homeland Security Committee Considers Philosophy of AI
Alex & Emily's appearance on the Tech Policy Press Podcast

Fresh AI Hell:
Asylum seekers vs AI-powered translation apps
UK officials use AI to decide on issues from benefits to marriage licenses
Prior guest Dr. Sarah Myers West testifying on AI concentration

Duration:00:57:52


Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20 2023

11/30/2023
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency. This episode was recorded on November 20, 2023.

Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she's the author of the forthcoming book "Tracing Code."

Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and the department of language and communication at Radboud University in the Netherlands. He's a co-author on research from this summer critically examining how truly "open source" models like LLaMA and ChatGPT really are.

References:
Yann LeCun testifies on 'open source' work at Meta
Meta launches LLaMA 2
Stanford Human-Centered AI's new transparency index (coverage in The Atlantic; Eleuther critique; Margaret Mitchell critique)
Opening up ChatGPT (Andreas Liesenfeld's work; webinar)

Fresh AI Hell:
Sam Altman out at OpenAI
The Verge: Meta disbands their Responsible AI team
Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes
Call-out of Stability and others' use of "fair use" in AI-generated art
A fawning profile of OpenAI's Ilya Sutskever

Duration:01:04:08


Episode 20: Let's Do the Time Warp! (to the "Founding" of "Artificial Intelligence"), November 6 2023

11/21/2023
Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all. This episode was recorded on November 6, 2023.

Watch the video version on PeerTube.

References:
"A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955)
Re: methodological individualism, "The Role of General Theory in Comparative-historical Sociology," American Journal of Sociology, 1991

Fresh AI Hell:
Silly made-up graph about "intelligence" of AI vs. "intelligence" of AI criticism
How AI is perpetuating racism and other bias against Palestinians:
The UN hired an AI company with "realistic virtual simulations" of Israel and Palestine
WhatsApp's AI sticker generator is feeding users images of Palestinian children holding guns
The Guardian on the same issue
Instagram 'Sincerely Apologizes' For Inserting 'Terrorist' Into Palestinian Bio Translations

Palate cleanser: An AI-powered smoothie shop shut down almost immediately after opening.

OpenAI chief scientist: Humans could become 'part AI' in the future
A Brief History of Intelligence: Why the evolution of the brain holds the key to the future of AI
AI-centered 'monastic academy': "MAPLE is a community of practitioners exploring the intersection of AI and wisdom."

Duration: 01:04:53

Episode 19: The Murky Climate and Environmental Impact of Large Language Models, November 6 2023

11/8/2023
Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Why even catastrophic estimates from well-meaning researchers may not tell the full story. This episode was recorded on November 6, 2023.

References:
"The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink"
"The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans"
"The growing energy footprint of artificial intelligence" - New York Times coverage: "AI Could Soon Need as Much Electricity as an Entire Country"
"Energy and Policy Considerations for Deep Learning in NLP"
"The 'invisible' materiality of information technology"
"Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning"
"AI is dangerous, but not for the reasons you think"

Fresh AI Hell:
Not the software to blame for deadly Tesla autopilot crash, but the company selling the software
4chan Uses Bing to Flood the Internet With Racist Images
Followup from Vice: Generative AI Is a Disaster, and Companies Don't Seem to Really Care
Is this evidence for LLMs having an internal "world model"?
"Approaching a universal Turing machine"
Americans Are Asking AI: 'Should I Get Back With My Ex?'

You can check out future livestreams at https://twitch.tv/DAIR_Institute. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: https://twitter.com/EmilyMBender, https://dair-community.social/@EmilyMBender, https://bsky.app/profile/emilymbender.bsky.social
Alex: https://twitter.com/@alexhanna, https://dair-community.social/@alex, https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Duration: 01:01:21

Episode 18: Rumors of Artificial General Intelligence Have Been Greatly Exaggerated, October 23 2023

10/31/2023
Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Why this claim is a maze of hype and moving goalposts.

References:
Noema Magazine: "Artificial General Intelligence Is Already Here"
"AI and the Everything in the Whole Wide World Benchmark"
"Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"
"Recoding Gender: Women's Changing Participation in Computing"
"The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"
"Is chess the drosophila of artificial intelligence? A social history of an algorithm"
"The logic of domains"
"Reckoning and Judgment"

Fresh AI Hell:
Using AI to meet "diversity goals" in modeling
AI ushering in a "post-plagiarism" era in writing
"Wildly effective and dirt cheap AI therapy"
Applying AI to "improve diagnosis for patients with rare diseases"
Using LLMs in scientific research
Health insurance company Cigna using AI to deny medical claims
AI for your wearable-based workout

You can check out future livestreams at https://twitch.tv/DAIR_Institute. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: https://twitter.com/EmilyMBender, https://dair-community.social/@EmilyMBender, https://bsky.app/profile/emilymbender.bsky.social
Alex: https://twitter.com/@alexhanna, https://dair-community.social/@alex, https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Duration: 01:00:02

Episode 17: Back to School with AI Hype in Education (feat. Haley Lepp), September 22 2023

10/4/2023
Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces - whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom.

Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use for educational purposes. Haley has worked in many roles in the education technology sector, including curriculum design and NLP engineering. She holds an M.S. in Computational Linguistics from the University of Washington and a B.S. in Science, Technology, and International Affairs from Georgetown University.

References:
University of Michigan debuts 'customized AI services'
Al Jazeera: An AI classroom revolution is coming
California Teachers Association: The Future of Education?
Politico: AI is not just for cheating
Extra credit: "Teaching Machines: The History of Personalized Learning" by Audrey Watters

Fresh AI Hell:
AI-generated travel article for Ottawa -- visit the food bank!
Microsoft Copilot is "usefully wrong"
  * Response from Jeff Doctor
"Ethical" production of "AI girlfriends"
Withdrawn AI-written preprint on millipedes resurfaces, causing alarm among myriapodological community
New York Times: How to Tell if Your A.I. Is Conscious
  * Response from VentureBeat: Today's AI is alchemy.
EU on the doomerism

You can check out future livestreams at https://twitch.tv/DAIR_Institute. Subscribe to our newsletter via Buttondown.

Follow us!
Emily: https://twitter.com/EmilyMBender, https://dair-community.social/@EmilyMBender, https://bsky.app/profile/emilymbender.bsky.social
Alex: https://twitter.com/@alexhanna, https://dair-community.social/@alex, https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.

Duration: 01:01:54