Mystery AI Hype Theater 3000


Author: Emily M. Bender and Alex Hanna


Description

Artificial Intelligence has too much hype. In this podcast, linguist Emily M. Bender and sociologist Alex Hanna break down the AI hype, separate fact from fiction, and science from bloviation. They're joined by special guests and talk about everything from machine consciousness to science fiction, from political economy to art made by machines.
30 Episodes
Dr. Timnit Gebru guest-hosts with Alex in a deep dive into Marc Andreessen's 2023 manifesto, which argues, loftily, in favor of maximizing the use of 'AI' in all possible spheres of life.

Timnit Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR). Prior to that she was co-lead of the Ethical AI research team at Google, which fired her in December 2020 for raising issues of discrimination in the workplace. Timnit also co-founded Black in AI, a nonprofit that works to increase the presence, inclusion, visibility and health of Black people in the field of AI, and is on the board of AddisCoder, a nonprofit dedicated to teaching algorithms and computer programming to Ethiopian high school students, free of charge.

References:
Marc Andreessen: "The Techno-Optimist Manifesto"
First Monday: The TESCREAL bundle: Eugenics and the promise of utopia through artificial general intelligence (Timnit Gebru & Émile Torres)
Business Insider: Explaining 'Pronatalism' in Silicon Valley

Fresh AI Hell:
CBS New York: NYC subway testing out weapons detection technology, Mayor Adams says
The Markup: NYC's AI chatbot tells businesses to break the law (read Emily's Twitter/Mastodon thread about this chatbot)
The Guardian: DrugGPT: New AI tool could help doctors prescribe medicine in England
The Guardian: Wearable AI: Will it put our smartphones out of fashion?
TheCurricula.com

You can check out future livestreams at https://twitch.tv/DAIR_Institute.

Follow us!
Emily: Twitter: https://twitter.com/EmilyMBender | Mastodon: https://dair-community.social/@EmilyMBender | Bluesky: https://bsky.app/profile/emilymbender.bsky.social
Alex: Twitter: https://twitter.com/alexhanna | Mastodon: https://dair-community.social/@alex | Bluesky: https://bsky.app/profile/alexhanna.bsky.social

Music by Toby Menon. Artwork by Naomi Pleasure-Park. Production by Christie Taylor.
Award-winning AI journalist Karen Hao joins Alex and Emily to talk about why LLMs can't possibly replace the work of reporters -- and why the hype is damaging to already-struggling and necessary publications.

References:
Adweek: Google Is Paying Publishers to Test an Unreleased Gen AI Platform
The Quint: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?

Fresh AI Hell:
Alliance for the Future
VentureBeat: Google researchers unveil 'VLOGGER', an AI that can bring still photos to life
Business Insider: A car dealership added an AI chatbot to its site. Then all hell broke loose.
More pranks on chatbots
Alex and Emily put on their social scientist hats and take on the churn of research papers suggesting that LLMs could be used to replace human labor in social science research -- or even human subjects. Plus, why these writings are essentially calls to fabricate data.

References:
PNAS: ChatGPT outperforms crowd workers for text-annotation tasks
Beware the Hype: ChatGPT Didn't Replace Human Data Annotators
ChatGPT Can Replace the Underpaid Workers Who Train AI, Researchers Say
Political Analysis: Out of One, Many: Using Language Models to Simulate Human Samples
Behavioral Research Methods: Can large language models help augment English psycholinguistic datasets?
Information Systems Journal: Editorial: The ethics of using generative AI for qualitative data analysis

Fresh AI Hell:
Advertising vs. reality, synthetic Willy Wonka edition:
https://x.com/AlsikkanTV/status/1762235022851948668?s=20
https://twitter.com/CultureCrave/status/1762739767471714379
https://twitter.com/xriskology/status/1762891492476006491?t=bNQ1AQlju36tQYxnm8BPVQ&s=19
A news outlet used an LLM to generate a story...and it falsely quoted Emily: AI Invents Quote From Real Person in Article by Bihar News Site: A Wake-Up Call?
Trump supporters target Black voters with faked AI images
Seeking Reliable Election Information? Don't Trust AI
Science fiction authors and all-around tech thinkers Annalee Newitz and Charlie Jane Anders join this week to talk about Isaac Asimov's oft-cited and equally often misunderstood laws of robotics, as debuted in his short story collection, 'I, Robot.' Meanwhile, both global and US military institutions are declaring interest in 'ethical' frameworks for autonomous weaponry.

Plus, in AI Hell, a ballsy scientific diagram heard 'round the world -- and a proposal for the end of books as we know it, from someone who clearly hates reading.

Charlie Jane Anders is a science fiction author. Her recent and forthcoming books include Promises Stronger Than Darkness in the 'Unstoppable' trilogy, the graphic novel New Mutants: Lethal Legion, and the forthcoming adult novel Prodigal Mother.

Annalee Newitz is a science journalist who also writes science fiction. Their most recent novel is The Terraformers, and in June you can look forward to their nonfiction book, Stories Are Weapons: Psychological Warfare and the American Mind.

They both co-host the podcast 'Our Opinions Are Correct', which explores how science fiction is relevant to real life and our present society.

Also, some fun news: Emily and Alex are writing a book! Look forward (in spring 2025) to The AI Con, a narrative takedown of the AI bubble and its megaphone-wielding boosters that exposes how tech's greedy prophets aim to reap windfall profits from the promise of replacing workers with machines.

Watch the video of this episode on PeerTube.

References:
International declaration on "Responsible Military Use of Artificial Intelligence and Autonomy," which provides "a normative framework addressing the use of these capabilities in the military domain."
DARPA's 'ASIMOV' program to "objectively and quantitatively measure the ethical difficulty of future autonomy use-cases...within the context of military operational values." (Short version; long version available as a PDF download.)

Fresh AI Hell:
"I think we will stop publishing books, but instead publish 'thunks', which are nuggets of thought that can interact with the 'reader' in a dynamic and multimedia way."
AI-generated illustrations in a scientific paper -- rat balls edition. Per Retraction Watch: the paper with illustrations of a rat with enormous "testtomcels" has been retracted.
Just Tech Fellow Dr. Chris Gilliard aka "Hypervisible" joins Emily and Alex to talk about the wave of universities adopting AI-driven educational technologies, and the lack of protections they offer students in terms of data privacy or even emotional safety.

References:
Inside Higher Ed: Arizona State Joins ChatGPT in First Higher Ed Partnership
ASU press release version: New Collaboration with OpenAI Charts the Future of AI in Higher Education
MLive: Your Classmate Could Be an AI Student at this Michigan University
Chris Gilliard: How Ed Tech Is Exploiting Students

Fresh AI Hell:
Various: "AI learns just like a kid" (Infants' gaze teaches AI the nuances of language acquisition; similar from NeuroscienceNews)
Politico: Psychologist apparently happy with fake version of himself
WSJ: Employers Are Offering a New Worker Benefit: Wellness Chatbots
NPR: Artificial intelligence can find your location in photos, worrying privacy expert
Palette cleanser: Goodbye to NYC's useless robocop.
Is ChatGPT really going to take your job? Emily and Alex unpack two hype-tastic papers that make implausible claims about the number of workforce tasks LLMs might make cheaper, faster or easier. Plus, why bad methodology may still trick companies into trying to replace human workers with mathy-math.

Visit us on PeerTube for the video of this conversation.

References:
OpenAI: GPTs are GPTs
Goldman Sachs: The Potentially Large Effects of Artificial Intelligence on Economic Growth
FYI: Over the last 60 years, automation has totally eliminated just one US occupation.

Fresh AI Hell:
Microsoft adding a dedicated "AI" key to PC keyboards (Dr. Damien P. Williams: "Yikes.")
The AI-led enshittification at Duolingo:
Shot: https://twitter.com/Rahll/status/1744234385891594380
Chaser: https://twitter.com/Maccadaynu/status/1744342930150560056
University of Washington Provost highlighting "AI"
"Using ChatGPT, My AI eBook Creation Pro helps you write an entire e-book with just three clicks -- no writing or technical experience required."
"Can you add artificial intelligence to the hydraulics?"
New year, same Bullshit Mountain. Alex and Emily are joined by feminist technosolutionism critics Eleanor Drage and Kerry McInerney to tear down the ways AI is proposed as a solution to structural inequality, including racism, ableism, and sexism -- and why this hype can occlude the need for more meaningful changes in institutions.

Dr. Eleanor Drage is a Senior Research Fellow at the Leverhulme Centre for the Future of Intelligence. Dr. Kerry McInerney is a Research Fellow at the Leverhulme Centre for the Future of Intelligence and a Research Fellow at the AI Now Institute. Together they host The Good Robot, a podcast about gender, feminism, and whether technology can be "good" in either outcomes or processes.

Watch the video version of this episode on PeerTube.

References:
HireVue promo: How Innovative Hiring Technology Nurtures Diversity, Equity, and Inclusion
Algorithm Watch: The [German Federal Asylum Agency]'s controversial dialect recognition software: new languages and an EU pilot project
Want to see how AI might be processing video of your face during a job interview? Play with React App, a tool that Eleanor helped develop to critique AI-powered video interview tools and the 'personality insights' they offer.
Philosophy & Technology: Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference" (Drage & McInerney, 2022)
Communication and Critical/Cultural Studies: Copies without an original: the performativity of biometric bordering technologies (Drage & Frabetti, 2023)

Fresh AI Hell:
Internet of Shit 2.0: a "smart" bidet
Fake AI "students" enrolled at Michigan University
Synthetic images destroy online crochet groups
"AI" for teacher performance feedback
Palette cleanser: "Stochastic parrot" is the American Dialect Society's AI-related word of the year for 2023!
AI Hell has frozen over for a single hour. Alex and Emily visit all seven circles in a tour of the worst in bite-sized BS.

References:
Pentagon moving toward letting AI weapons autonomously kill humans
NYC Mayor uses AI to make robocalls in languages he doesn't speak
University of Michigan investing in OpenAI
Tesla: claims of "full self-driving" are free speech
LLMs may not "understand" output
'Maths-ticated' data
LLMs can't analyze an SEC filing
How GPT-4 can be used to create fake datasets
Paper thanking GPT-4 concludes LLMs are good for science
Will AI Improve Healthcare? Consumers Think So
US struggling to regulate AI in healthcare
Andrew Ng's low p(doom)
Presenting the "Off-Grid AGI Safety Facility"
Chess is in the training data
DropBox files now shared with OpenAI
Underline.io and 'commercial exploitation'
Axel Springer, OpenAI strike "real-time news" deal
Adobe Stock selling AI-generated images of Israel-Hamas conflict
Sports Illustrated Published Articles by AI Writers
Cruise confirms robotaxis rely on human assistance every 4-5 miles
Underage workers training AI, exposed to traumatic content
Prisoners training AI in Finland
ChatGPT gives better output in response to emotional language
An explanation for bad AI journalism
UK judges now permitted to use ChatGPT in legal rulings
Michael Cohen's attorney apparently used generative AI in court petition
Brazilian city enacts ordinance secretly written by ChatGPT
The lawyers getting fired for using ChatGPT
Using sequences of life-events to predict human lives

Your palette-cleanser: Is my toddler a stochastic parrot?
Congress spent 2023 busy with hearings to investigate the capabilities, risks and potential uses of large language models and other 'artificial intelligence' systems. Alex and Emily, plus journalist Justin Hendrix, talk about the limitations of these hearings, the alarmist fixation on so-called 'p(doom)' and overdue laws on data privacy.

Justin Hendrix is editor of the Tech Policy Press.

References:
TPP tracker for the US Senate 'AI Insight Forum' hearings
Balancing Knowledge and Governance: Foundations for Effective Risk Management of AI (featuring Emily)
Hearing charter
Emily's opening remarks at virtual roundtable on AI
Senate hearing addressing national security implications of AI
Video: Rep. Nancy Mace opens hearing with ChatGPT-generated statement
Brennan Center report on Department of Homeland Security: Overdue Scrutiny for Watch Listing and Risk Prediction
TPP: Senate Homeland Security Committee Considers Philosophy of AI
Alex & Emily's appearance on the Tech Policy Press Podcast

Fresh AI Hell:
Asylum seekers vs AI-powered translation apps
UK officials use AI to decide on issues from benefits to marriage licenses
Prior guest Dr. Sarah Myers West testifying on AI concentration
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency.

This episode was recorded on November 20, 2023.

Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she's the author of the forthcoming book, "Tracing Code."

Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and the department of language and communication at Radboud University in the Netherlands. He's a co-author on research from this summer critically examining the true "open source" nature of models like LLaMA and ChatGPT.

References:
Yann LeCun testifies on 'open source' work at Meta
Meta launches LLaMA 2
Stanford Human-Centered AI's new transparency index
Coverage in The Atlantic
Eleuther critique
Margaret Mitchell critique
Opening up ChatGPT (Andreas Liesenfeld's work)
Webinar

Fresh AI Hell:
Sam Altman out at OpenAI
The Verge: Meta disbands their Responsible AI team
Ars Technica: Lawsuit claims AI with 90 percent error rate forces elderly out of rehab, nursing homes
Call-out of Stability and others' use of "fair use" in AI-generated art
A fawning profile of OpenAI's Ilya Sutskever
Emily and Alex time travel back to a conference of men who gathered at Dartmouth College in the summer of 1956 to examine problems relating to computation and "thinking machines," an event commonly mythologized as the founding of the field of artificial intelligence. But our crack team of AI hype detectives is on the case with a close reading of the grant proposal that started it all.

This episode was recorded on November 6, 2023. Watch the video version on PeerTube.

References:
"A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence" (1955)
Re: methodological individualism: "The Role of General Theory in Comparative-historical Sociology," American Journal of Sociology, 1991

Fresh AI Hell:
Silly made-up graph about "intelligence" of AI vs. "intelligence" of AI criticism
How AI is perpetuating racism and other bias against Palestinians:
The UN hired an AI company with "realistic virtual simulations" of Israel and Palestine
WhatsApp's AI sticker generator is feeding users images of Palestinian children holding guns
The Guardian on the same issue
Instagram 'Sincerely Apologizes' For Inserting 'Terrorist' Into Palestinian Bio Translations
Palette cleanser: An AI-powered smoothie shop shut down almost immediately after opening.
OpenAI chief scientist: Humans could become 'part AI' in the future
A Brief History of Intelligence: Why the evolution of the brain holds the key to the future of AI
AI-centered 'monastic academy': "MAPLE is a community of practitioners exploring the intersection of AI and wisdom."
Drs. Emma Strubell and Sasha Luccioni join Emily and Alex for an environment-focused hour of AI hype. How much carbon does a single use of ChatGPT emit? What about the water or energy consumption of manufacturing the graphics processing units that train various large language models? Plus, why even catastrophic estimates from well-meaning researchers may not tell the full story.

This episode was recorded on November 6, 2023.

References:
"The Carbon Footprint of Machine Learning Training Will Plateau, Then Shrink"
"The Carbon Emissions of Writing and Illustrating Are Lower for AI than for Humans"
The growing energy footprint of artificial intelligence (New York Times coverage: "AI Could Soon Need as Much Electricity as an Entire Country")
"Energy and Policy Considerations for Deep Learning in NLP"
"The 'invisible' materiality of information technology"
"Counting Carbon: A Survey of Factors Influencing the Emissions of Machine Learning"
"AI is dangerous, but not for the reasons you think."

Fresh AI Hell:
Not the software to blame for deadly Tesla autopilot crash, but the company selling the software
4chan Uses Bing to Flood the Internet With Racist Images
Followup from Vice: Generative AI Is a Disaster, and Companies Don't Seem to Really Care
Is this evidence for LLMs having an internal "world model"?
"Approaching a universal Turing machine"
Americans Are Asking AI: 'Should I Get Back With My Ex?'
Emily and Alex read through Google vice president Blaise Aguera y Arcas' recent proclamation that "artificial general intelligence is already here." Plus, why this claim is a maze of hype and moving goalposts.

References:
Noema Magazine: "Artificial General Intelligence Is Already Here"
"AI and the Everything in the Whole Wide World Benchmark"
"Targeting the Benchmark: On Methodology and Current Natural Language Processing Research"
"Recoding Gender: Women's Changing Participation in Computing"
"The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise"
"Is chess the drosophila of artificial intelligence? A social history of an algorithm"
"The logic of domains"
"Reckoning and Judgment"

Fresh AI Hell:
Using AI to meet "diversity goals" in modeling
AI ushering in a "post-plagiarism" era in writing
"Wildly effective and dirt cheap AI therapy."
Applying AI to "improve diagnosis for patients with rare diseases."
Using LLMs in scientific research
Health insurance company Cigna using AI to deny medical claims
AI for your wearable-based workout
Emily and Alex are joined by Stanford PhD student Haley Lepp to examine the increasing hype around LLMs in education spaces -- whether they're pitched as ways to reduce teacher workloads, increase accessibility, or simply "democratize learning and knowing" in the Global South. Plus a double dose of devaluing educator expertise and fatalism about the 'inevitability' of LLMs in the classroom.

Haley Lepp is a Ph.D. student in the Stanford University Graduate School of Education. She draws on critical data studies, computational social science, and qualitative methods to understand the rise of language technologies and their use for educational purposes. Haley has worked in many roles in the education technology sector, including curriculum design and NLP engineering. She holds an M.S. in Computational Linguistics from the University of Washington and a B.S. in Science, Technology, and International Affairs from Georgetown University.

References:
University of Michigan debuts 'customized AI services'
Al Jazeera: An AI classroom revolution is coming
California Teachers Association: The Future of Education?
Politico: AI is not just for cheating
Extra credit: "Teaching Machines: The History of Personalized Learning" by Audrey Watters

Fresh AI Hell:
AI-generated travel article for Ottawa -- visit the food bank!
Microsoft Copilot is "usefully wrong" (response from Jeff Doctor)
"Ethical" production of "AI girlfriends"
Withdrawn AI-written preprint on millipedes resurfaces, causing alarm among myriapodological community
New York Times: How to Tell if Your A.I. Is Conscious (response from VentureBeat: Today's AI is alchemy)
EU
Alex and Emily are taking another stab at Google and other companies' aspirations to be part of the healthcare system -- this time with the expertise of Stanford incoming assistant professor of dermatology and biomedical data science Roxana Daneshjou. A look at the gap between medical licensing examination questions and real life, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process.

References:
Google blog post describing Med-PaLM
Nature: Large language models encode clinical knowledge
Politico: Microsoft teaming up with Epic Systems to integrate generative AI into electronic medical records software
medRxiv: Beyond the hype: large language models propagate race-based medicine (Omiye, Daneshjou, et al.)

Fresh AI Hell:
Fake summaries of fake reviews: https://bsky.app/profile/hypervisible.bsky.social/post/3k4wouet3pg2u
School administrators asking ChatGPT which books they have to remove from school libraries, given Iowa's book ban. Mason City Globe Gazette: "Each of these texts was reviewed using AI software to determine if it contains a depiction of a sex act. Based on this review, there are 19 texts that will be removed from our 7-12 school library collections and stored in the Administrative Center while we await further guidance or clarity."
Loquacity and Visible Emotion: ChatGPT as a Policy Advisor (written by authors at the Bank of Italy)
AI-generated school bus routes get students home at 10pm
Lethal AI-generated mushroom-hunting books
How would RBG respond?
Emily and Alex tackle the White House hype about the 'voluntary commitments' of companies to limit the harms of their large language models: but only some large language models, and only some, over-hyped, kinds of harms.

Plus a full portion of Fresh Hell...and a little bit of good news.

References:
White House press release on voluntary commitments
Emily's blog post critiquing the "voluntary commitments"
An "AI safety" infused take on regulation
AI Causes Real Harm. Let's Focus on That over the End-of-Humanity Hype
"AI" Hurts Consumers and Workers -- and Isn't Intelligent

Fresh AI Hell:
Future of Life Institute hijacks SEO for EU's AI Act
LLMs for denying health insurance claims
NHS using "AI" as receptionist
Automated robots in reception
Can AI language models replace human research participants?
A recipe chatbot taught users how to make chlorine gas
Using a chatbot to pretend to interview Harriet Tubman
Worldcoin Orbs & iris scans
Martin Shkreli's AI for health start-up
Authors impersonated with fraudulent books on Amazon/Goodreads

Good News:
Emily and Alex are joined by technology scholar Dr. Lucy Suchman to scrutinize a new book from Henry Kissinger and coauthors Eric Schmidt and Daniel Huttenlocher that declares a new 'Age of AI,' with abundant hype about the capacity of large language models for warmaking. Plus close scrutiny of Palantir's debut of an artificial intelligence platform for combat, and why the company is promising more than the mathy-maths can provide.

Dr. Lucy Suchman is a professor emerita of sociology at Lancaster University in the UK. She works at the intersections of anthropology and the field of feminist science and technology studies, focused on cultural imaginaries and material practices of technology design. Her current research extends her longstanding critical engagement with the fields of artificial intelligence and human-computer interaction to the domain of contemporary militarism. She is concerned with the question of whose bodies are incorporated into military systems, how, and with what consequences for social justice and the possibility of a less violent world.

This episode was recorded on July 21, 2023. Watch the video on PeerTube.

References:
Wall Street Journal: OpEd derived from 'The Age of AI' (Kissinger, Schmidt & Huttenlocher)
American Prospect: Meredith Whittaker & Lucy Suchman's review of Kissinger et al.'s book
VICE: Palantir Demos AI To Fight Wars But Says It Will Be Totally Ethical About It Don't Worry About It

Fresh AI Hell:
American Psychological Association: how to cite ChatGPT: https://apastyle.apa.org/blog/how-to-cite-chatgpt
Spam reviews & children's books: https://twitter.com/millbot/status/1671008061173952512?s=20
An analysis we like, comparing AI to the fossil fuel industry: https://hachyderm.io/@dalias/110528154854288688
AI Heaven from Dolly Parton: https://consequence.net/2023/07/dolly-parton-ai-hologram-comments/
Emily and Alex talk to UC Berkeley scholar Hannah Zeavin about the case of the National Eating Disorders Association helpline, which tried to replace human volunteers with a chatbot, and why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable.

Content note: This conversation touches on mental health, people in crisis, and exploitation.

This episode was originally recorded on June 8, 2023. Watch the video version on PeerTube.

Hannah Zeavin is a scholar, writer, and editor whose work centers on the history of the human sciences (psychoanalysis, psychology, and psychiatry), the history of technology and media, feminist science and technology studies, and media theory. Zeavin is an Assistant Professor of the History of Science in the Department of History and the Berkeley Center for New Media at UC Berkeley. She is the author of "The Distance Cure: A History of Teletherapy."

References:
VICE: Eating Disorder Helpline Fires Staff, Transitions to Chatbot After Unionization… and then pulls the chatbot.
NPR: Can an AI chatbot help people with eating disorders as well as another human?
Psychiatrist.com: NEDA suspends AI chatbot for giving harmful eating disorder advice
Politico: Suicide hotline shares data with for-profit spinoff, raising ethical questions
Danah Boyd: Crisis Text Line from my perspective
Tech Workers Coalition: Chatbots can't care like we do
Slate: Who's listening when you call a crisis hotline? Helplines and the carceral system
Hannah Zeavin:
Take a deep breath and join Alex and Emily in AI Hell itself, as they take down a month's worth of hype in a mere 60 minutes.

This episode aired on Friday, May 5, 2023. Watch the video of this episode on PeerTube.

References:
Terrifying NEJM article on GPT-4 in medicine
“Healthcare professionals preferred ChatGPT 79% of the time”
Good thoughts from various experts in response
ChatGPT supposedly reading dental x-rays
Chatbots “need” therapists
CEO proposes AI therapist, removes proposal upon realizing there’s regulation: https://twitter.com/BEASTMODE/status/1650013819693944833 (deleted)
ChatGPT is more carbon efficient than human writers
Asking the disinformation machine for confirmation bias
GPT-4 glasses to tell you what to say on dates, "Charisma as a Service"
Context-aware fill for missing data
“Overemployed” with help from ChatGPT
Pakistani court uses GPT-4 in bail decision
ChatGPT in Peruvian and Mexican courts
Elon Musk’s deepfake defense
Elon Musk's TruthGPT
Fake interview in German publication revealed as “AI” at the end of the article
After a hype-y few weeks of AI happenings, Alex and Emily shovel the BS on GPT-4’s “system card,” its alleged “sparks of Artificial General Intelligence,” and a criti-hype heavy "AI pause" letter. Hint: for a good time, check the citations.

This episode originally aired on Friday, April 7, 2023. You can also watch the video of this episode on PeerTube.

References:
GPT-4 system card: https://cdn.openai.com/papers/gpt-4-system-card.pdf
“Sparks of AGI” hype: https://twitter.com/SebastienBubeck/status/1638704164770332674
And the preprint from Bubeck et al.: https://arxiv.org/abs/2303.12712
“Pause AI” letter: https://futureoflife.org/open-letter/pause-giant-ai-experiments/
The “Sparks” paper points to this 1997 editorial in its definition of “intelligence”: https://www1.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf
Radiolab's miniseries 'G': https://radiolab.org/series/radiolab-presents-g
Baria and Cross, "The brain is a computer is a brain": https://arxiv.org/abs/2107.14042
Senator Chris Murphy buys the hype: https://twitter.com/ChrisMurphyCT/status/1640186536825061376
Generative “AI” is making “police sketches”: https://twitter.com/Wolven/status/1624299508371804161?t=DXyucCPYPAKNn8TtAo0xeg&s=19
More mathy math in policing: https://www.cbsnews.com/colorado/news/aurora-police-new-ai-system-bodycam-footage/?utm_source=dlvr.it&utm_medium=twitter
User research without the users: https://twitter.com/schock/status/1643392611560878086
DoNotPay is here to cancel your gym membership: https://twitter.com/BrianBrackeen/status/1644193519496511488?s=20