Doom Debates

Author: Liron Shapira

Description

It's time to talk about the end of the world!

lironshapira.substack.com
80 Episodes
John Searle's "Chinese Room argument" has been called one of the most famous thought experiments of the 20th century. It's still frequently cited today to argue AI can never truly become intelligent.

People continue to treat the Chinese Room like a brilliant insight, but in my opinion, it's actively misleading and DUMB! Here’s why…

00:00 Intro
00:20 What is Searle's Chinese Room Argument?
01:43 John Searle (1984) on Why Computers Can't Understand
01:54 Why the "Chinese Room" Metaphor is Misleading

This mini-episode is taken from Liron's reaction to Sir Roger Penrose. Watch the full episode:

Show Notes
2008 Interview with John Searle: https://www.youtube.com/watch?v=3TnBjLmQawQ&t=253s
1984 Debate with John Searle: https://www.youtube.com/watch?v=6tzjcnPsZ_w
“Chinese Room” cartoon: https://miro.medium.com/v2/0*iTvDe5ebNPvg10AO.jpeg

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Let’s see where the attendees of Manifest 2025 get off the Doom Train, and whether I can convince them to stay on and ride with me to the end of the line!

00:00 Introduction to Doom Debates
03:21 What’s Your P(Doom)?™
05:03 🚂 “AGI Isn't Coming Soon”
08:37 🚂 “AI Can't Surpass Human Intelligence”
12:20 🚂 “AI Won't Be a Physical Threat”
13:39 🚂 “Intelligence Yields Moral Goodness”
17:21 🚂 “Safe AI Development Process”
17:38 🚂 “AI Capabilities Will Rise at a Manageable Pace”
20:12 🚂 “AI Won't Try to Conquer the Universe”
25:00 🚂 “Superalignment Is A Tractable Problem”
28:58 🚂 “Once We Solve Superalignment, We’ll Enjoy Peace”
31:51 🚂 “Unaligned ASI Will Spare Us”
36:40 🚂 “AI Doomerism Is Bad Epistemology”
40:11 Bonus 🚂: “Fine, P(Doom) is high… but that’s ok!”
42:45 Recapping the Debate

See also my previous episode explaining the Doom Train: https://lironshapira.substack.com/p/poking-holes-in-the-ai-doom-argument

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
I often talk about the “Doom Train”, the series of claims and arguments involved in concluding that P(Doom) from artificial superintelligence is high. In this episode, it’s finally time to show you the whole track!

00:00 Introduction
01:09 “AGI isn’t coming soon”
04:42 “Artificial intelligence can’t go far beyond human intelligence”
07:24 “AI won’t be a physical threat”
08:28 “Intelligence yields moral goodness”
09:39 “We have a safe AI development process”
10:48 “AI capabilities will rise at a manageable pace”
12:28 “AI won’t try to conquer the universe”
15:12 “Superalignment is a tractable problem”
16:55 “Once we solve superalignment, we’ll enjoy peace”
19:02 “Unaligned ASI will spare us”
20:12 “AI doomerism is bad epistemology”
21:42 Bonus arguments: “Fine, P(Doom) is high… but that’s ok!”

Stops on the Doom Train

AGI isn’t coming soon
* No consciousness
* No emotions
* No creativity — AIs are limited to copying patterns in their training data, they can’t “generate new knowledge”
* AIs aren’t even as smart as dogs right now, never mind humans
* AIs constantly make dumb mistakes, they can’t even do simple arithmetic reliably
* LLM performance is hitting a wall — GPT-4.5 is barely better than GPT-4.1 despite being larger scale
* No genuine reasoning
* No microtubules exploiting uncomputable quantum effects
* No soul
* We’ll need to build tons of data centers and power before we get to AGI
* No agency
* This is just another AI hype cycle; every 25 years people think AGI is coming soon, and they’re wrong

Artificial intelligence can’t go far beyond human intelligence
* “Superhuman intelligence” is a meaningless concept
* Human engineering is already coming close to the limits set by the laws of physics
* Coordinating a large engineering project can’t happen much faster than humans do it
* No individual human is that smart compared to humanity as a whole, including our culture, corporations, and other institutions. Similarly, no individual AI will ever be that smart compared to the sum of human culture and other institutions.

AI won’t be a physical threat
* AI doesn’t have arms or legs, it has zero control over the real world
* An AI with a robot body can’t fight better than a human soldier
* We can just disconnect an AI’s power to stop it
* We can just turn off the internet to stop it
* We can just shoot it with a gun
* It’s just math
* Any supposed chain of events where AI kills humans is far-fetched science fiction

Intelligence yields moral goodness
* More intelligence is correlated with more morality
* Smarter people commit fewer crimes
* The orthogonality thesis is false
* AIs will discover moral realism
* If we made AIs so smart, and we were trying to make them moral, then they’ll be smart enough to debug their own morality
* Positive-sum cooperation was the outcome of natural selection

We have a safe AI development process
* Just like every new technology, we’ll figure it out as we go
* We don’t know what problems need to be fixed until we build the AI and test it out
* If an AI causes problems, we’ll be able to turn it off and release another version
* We have safeguards to make sure AI doesn’t get uncontrollable/unstoppable
* If we accidentally build an AI that stops accepting our shutoff commands, it won’t manage to copy versions of itself outside our firewalls which then proceed to spread exponentially like a computer virus
* If we accidentally build an AI that escapes our data center and spreads exponentially like a computer virus, it won’t do too much damage in the world before we can somehow disable or neutralize all its copies
* If we can’t disable or neutralize copies of rogue AIs, we’ll rapidly build other AIs that can do that job for us, and won’t themselves go rogue on us

AI capabilities will rise at a manageable pace
* Building larger data centers will be a speed bottleneck
* Another speed bottleneck is the amount of research that needs to be done, both in terms of computational simulation and in terms of physical experiments, and this kind of research takes lots of time
* Recursive self-improvement “foom” is impossible
* The whole economy never grows with localized centralized “foom”
* AIs need to collect cultural learnings over time, like humanity did as a whole
* AI is just part of the good pattern of exponential economic growth eras

AI won’t try to conquer the universe
* AIs can’t “want” things
* AIs won’t have the same “fight instincts” as humans and animals, because they weren’t shaped by a natural selection process that involved life-or-death resource competition
* Smart employees often work for less-smart bosses
* Just because AIs help achieve goals doesn’t mean they have to be hard-core utility maximizers
* Instrumental convergence is false: achieving goals effectively doesn’t mean you have to be relentlessly seizing power and resources
* A resource-hungry goal-maximizer AI wouldn’t seize literally every atom; there’ll still be some leftover resources for humanity
* AIs will use new kinds of resources that humans aren’t using - dark energy, wormholes, alternate universes, etc

Superalignment is a tractable problem
* Current AIs have never killed anybody
* Current AIs are extremely successful at doing useful tasks for humans
* If AIs are trained on data from humans, they’ll be “aligned by default”
* We can just make AIs abide by our laws
* We can align the superintelligent AIs by using a scheme involving cryptocurrency on the blockchain
* Companies have economic incentives to solve superintelligent AI alignment, because unaligned superintelligent AI would hurt their profits
* We’ll build an aligned not-that-smart AI, which will figure out how to build the next-generation AI which is smarter and still aligned to human values, and so on until aligned superintelligence

Once we solve superalignment, we’ll enjoy peace
* The power from ASI won’t be monopolized by a single human government / tyranny
* The decentralized nodes of human-ASI hybrids won’t be like warlords constantly fighting each other; they’ll be like countries making peace
* Defense will have an advantage over attack, so the equilibrium of all the groups of humans and ASIs will be multiple defended regions, not a war of mutual destruction
* The world of human-owned ASIs is a stable equilibrium, not one where ASI-focused projects keep buying out and taking resources away from human-focused ones (Gradual Disempowerment)

Unaligned ASI will spare us
* The AI will spare us because it values the fact that we created it
* The AI will spare us because studying us helps maximize its curiosity and learning
* The AI will spare us because it feels toward us the way we feel toward our pets
* The AI will spare us because peaceful coexistence creates more economic value than war
* The AI will spare us because Ricardo’s Law of Comparative Advantage says you can still benefit economically from trading with someone who’s weaker than you

AI doomerism is bad epistemology
* It’s impossible to predict doom
* It’s impossible to put a probability on doom
* Every doom prediction has always been wrong
* Every doomsayer is either psychologically troubled or acting on corrupt incentives
* If we were really about to get doomed, everyone would already be agreeing about that, and bringing it up all the time

Sure, P(Doom) is high, but let’s race to build it anyway because…

Coordinating to not build ASI is impossible
* China will build ASI as fast as it can, no matter what — because of game theory
* So however low our chance of surviving it is, the US should take the chance first

Slowing down the AI race doesn’t help anything
* Chances of solving AI alignment won’t improve if we slow down or pause the capabilities race
* I personally am going to die soon, and I don’t care about future humans, so I’m open to any Hail Mary to prevent myself from dying
* Humanity is already going to rapidly destroy ourselves with nuclear war, climate change, etc.
* Humanity is already going to die out soon because we won’t have enough babies

Think of the good outcome
* If it turns out that doom from overly-fast AI building doesn’t happen, we can more quickly get to the good outcome!
* People will stop suffering and dying faster

AI killing us all is actually good
* Human existence is morally negative on net, or close to zero net moral value
* Whichever AI ultimately comes to power will be a “worthy successor” to humanity
* Whichever AI ultimately comes to power will be as morally valuable as human descendants generally are to their ancestors, even if their values drift
* The successor AI’s values will be interesting, productive values that let it successfully compete to dominate the universe
* How can you argue with the moral choices of an ASI that’s smarter than you? Do you really know goodness better than it does?
* It’s species-ist to judge what a superintelligent AI would want to do. The moral circle shouldn’t be limited to just humanity.
* Increasing entropy is the ultimate north star for techno-capital, and AI will increase entropy faster
* Human extinction will solve the climate crisis, and pollution, and habitat destruction, and let Mother Earth heal

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Ilya’s doom bunker, proof of humanity, the doomsday argument, CAIS firing John Sherman, Bayesian networks, Westworld, AI consciousness, Eliezer’s latest podcast, and more!

00:00 Introduction
04:13 Doomsday Argument
09:22 What if AI Alignment is *Intractable*?
14:31 Steel-Manning the Nondoomers
22:13 No State-Level AI Regulation for 10 years?
32:31 AI Consciousness
35:25 Westworld Is Real Now
38:01 Proof of Humanity
40:33 Liron’s Notary Network Idea
43:34 Center for AI Safety and John Sherman Controversy
57:04 Technological Advancements and Future Predictions
01:03:14 Ilya Sutskever’s Doom Bunker
01:07:32 The Future of AGI and Training Models
01:12:19 Personal Experience of the Jetsons Future
01:15:16 The Role of AI in Everyday Tasks
01:18:54 Is General Intelligence A Binary Property?
01:23:52 Does an Open Platform Help Make AI Safe?
01:27:21 What of Understandable AI Like Bayesian Networks?
01:30:28 Why Doom Isn’t Emotionally Real for Liron

Show Notes
The post where people submitted questions: https://lironshapira.substack.com/p/5000-subscribers-live-q-and-a-ask

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk!
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at DoomDebates.com and to youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Himanshu Tyagi is a professor of engineering at the Indian Institute of Science and the co-founder of Sentient, an open-source AI platform that raised $85M in funding led by Founders Fund.

In this conversation, Himanshu gives me Sentient’s pitch. Then we debate whether open-sourcing frontier AGI development is a good idea, or a reckless way to raise humanity’s P(doom).

00:00 Introducing Himanshu Tyagi
01:41 Sentient’s Vision
05:20 How’d You Raise $85M?
11:19 Comparing Sentient to Competitors
27:26 Open Source vs. Closed Source AI
43:01 What’s Your P(Doom)™
48:44 Extinction from Superintelligent AI
54:02 AI's Control Over Digital and Physical Assets
01:00:26 AI's Influence on Human Movements
01:08:46 Recapping the Debate
01:13:17 Liron’s Announcements

Show Notes
Himanshu’s Twitter — https://x.com/hstyagi
Sentient’s website — https://sentient.foundation
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
My friend John Sherman from the For Humanity podcast got hired by the Center for AI Safety (CAIS) two weeks ago. Today I suddenly learned he’s been fired.

I’m frustrated by this decision, and frustrated with the whole AI x-risk community’s weak messaging.

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Prof. Gary Marcus is a scientist, bestselling author and entrepreneur, well known as one of the most influential voices in AI. He is Professor Emeritus of Psychology and Neuroscience at NYU. He was founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016.

Gary co-authored the 2019 book Rebooting AI: Building Artificial Intelligence We Can Trust, and the 2024 book Taming Silicon Valley: How We Can Ensure That AI Works for Us. He played an important role in the 2023 Senate Judiciary Subcommittee Hearing on Oversight of AI, testifying with Sam Altman.

In this episode, Gary and I have a lively debate about whether P(doom) is approximately 50%, or if it’s less than 1%!

00:00 Introducing Gary Marcus
02:33 Gary’s AI Skepticism
09:08 The Human Brain is a Kluge
23:16 The 2023 Senate Judiciary Subcommittee Hearing
28:46 What’s Your P(Doom)™
44:27 AI Timelines
51:03 Is Superintelligence Real?
01:00:35 Humanity’s Immune System
01:12:46 Potential for Recursive Self-Improvement
01:26:12 AI Catastrophe Scenarios
01:34:09 Defining AI Agency
01:37:43 Gary’s AI Predictions
01:44:13 The NYTimes Obituary Test
01:51:11 Recap and Final Thoughts
01:53:35 Liron’s Outro
01:55:34 Eliezer Yudkowsky’s New Book!
01:59:49 AI Doom Concept of the Day

Show Notes
Gary’s Substack — https://garymarcus.substack.com
Gary’s Twitter — https://x.com/garymarcus
If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares — https://ifanyonebuildsit.com
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!

00:00 Introducing Mike Israetel
12:19 What’s Your P(Doom)™
30:58 Timelines for Artificial General Intelligence
34:49 Superhuman AI Capabilities
43:26 AI Reasoning and Creativity
47:12 Evil AI Scenario
01:08:06 Will the AI Cooperate With Us?
01:12:27 AI's Dependence on Human Labor
01:18:27 Will AI Keep Us Around to Study Us?
01:42:38 AI's Approach to Earth's Resources
01:53:22 Global AI Policies and Risks
02:03:02 The Quality of Doom Discourse
02:09:23 Liron’s Outro

Show Notes
* Mike’s Instagram — https://www.instagram.com/drmikeisraetel
* Mike’s YouTube — https://www.youtube.com/@MikeIsraetelMakingProgress

Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online
Hope to see you there!

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
I want to be transparent about how I’ve updated my mainline AI doom scenario in light of safe & useful LLMs. So here’s where I’m at…

00:00 Introduction
07:59 The Dangerous Threshold to Runaway Superintelligence
18:57 Superhuman Goal Optimization = Infinite Time Horizon
21:21 Goal-Completeness by Analogy to Turing-Completeness
26:53 Intellidynamics
29:13 Goal-Optimization Is Convergent
31:15 Early AIs Lose Control of Later AIs
34:46 The Superhuman Threshold Is Real
38:27 Expecting Rapid FOOM
40:20 Rocket Alignment
49:59 Stability of Values Under Self-Modification
53:13 The Way to Heaven Passes Right By Hell
57:32 My Mainline Doom Scenario
01:17:46 What Values Does The Goal Optimizer Have?

Show Notes
My recent episode with Jim Babcock on this same topic of mainline doom scenarios — https://www.youtube.com/watch?v=FaQjEABZ80g
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.

Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.

This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.

00:00 Introducing Jim Babcock
01:29 The Evolution of LessWrong Doom Scenarios
02:22 LessWrong’s Mission
05:49 The Rationalist Community and AI
09:37 What’s Your P(Doom)™
18:26 What Are Yudkowskians Surprised About?
26:48 Moral Philosophy vs. Goal Alignment
36:56 Sandboxing and AI Containment
42:51 Holding Yudkowskians Accountable
58:29 Understanding Next Word Prediction
01:00:02 Pre-Training vs Post-Training
01:08:06 The Rocket Alignment Problem Analogy
01:30:09 FOOM vs. Gradual Disempowerment
01:45:19 Recapping the Mainline Doom Scenario
01:52:08 Liron’s Outro

Show Notes
Jim’s LessWrong — https://www.lesswrong.com/users/jimrandomh
Jim’s Twitter — https://x.com/jimrandomh
The Rocket Alignment Problem by Eliezer Yudkowsky — https://www.lesswrong.com/posts/Gg9a4y8reWKtLe3Tn/the-rocket-alignment-problem
Optimality is the Tiger and Agents Are Its Teeth — https://www.lesswrong.com/posts/kpPnReyBC54KESiSn/optimality-is-the-tiger-and-agents-are-its-teeth
Doom Debates episode about the research paper discovering AI's utility function — https://lironshapira.substack.com/p/cais-researchers-discover-ais-preferences
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Ozzie Gooen is the founder of the Quantified Uncertainty Research Institute (QURI), a nonprofit building software tools for forecasting and policy analysis. I’ve known him through the rationality community since 2008 and we have a lot in common.

00:00 Introducing Ozzie
02:18 The Rationality Community
06:32 What’s Your P(Doom)™
08:09 High-Quality Discourse and Social Media
14:17 Guesstimate and Squiggle Demos
31:57 Prediction Markets and Rationality
38:33 Metaforecast Demo
41:23 Evaluating Everything with LLMs
47:00 Effective Altruism and FTX Scandal
56:00 The Repugnant Conclusion Debate
01:02:25 AI for Governance and Policy
01:12:07 PauseAI Policy Debate
01:30:10 Status Quo Bias
01:33:31 Decaf Coffee and Caffeine Powder
01:34:45 Are You Aspie?
01:37:45 Billionaires in Effective Altruism
01:48:06 Gradual Disempowerment by AI
01:55:36 LessOnline Conference
01:57:34 Supporting Ozzie’s Work

Show Notes
Quantified Uncertainty Research Institute (QURI) — https://quantifieduncertainty.org
Ozzie’s Facebook — https://www.facebook.com/ozzie.gooen
Ozzie’s Twitter — https://x.com/ozziegooen
Guesstimate, a spreadsheet for working with probability ranges — https://www.getguesstimate.com
Squiggle, a programming language for building Monte Carlo simulations — https://www.squiggle-language.com
Metaforecast, a prediction market aggregator — https://metaforecast.org
Open Annotate, AI-powered content analysis — https://github.com/quantified-uncertainty/open-annotate/
Come to the Less Online conference on May 30 - Jun 1, 2025: https://less.online

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
David Duvenaud is a professor of Computer Science at the University of Toronto, co-director of the Schwartz Reisman Institute for Technology and Society, former Alignment Evals Team Lead at Anthropic, an award-winning machine learning researcher, and a close collaborator of Dr. Geoffrey Hinton. He recently co-authored Gradual Disempowerment.

We dive into David’s impressive career, his high P(Doom), his recent tenure at Anthropic, his views on gradual disempowerment, and the critical need for improved governance and coordination on a global scale.

00:00 Introducing David
03:03 Joining Anthropic and AI Safety Concerns
35:58 David’s Background and Early Influences
45:11 AI Safety and Alignment Challenges
54:08 What’s Your P(Doom)™
01:06:44 Balancing Productivity and Family Life
01:10:26 The Hamming Question: Are You Working on the Most Important Problem?
01:16:28 The PauseAI Movement
01:20:28 Public Discourse on AI Doom
01:24:49 Courageous Voices in AI Safety
01:43:54 Coordination and Government Role in AI
01:47:41 Cowardice in AI Leadership
02:00:05 Economic and Existential Doom
02:06:12 Liron’s Post-Show

Show Notes
David’s Twitter — https://x.com/DavidDuvenaud
Schwartz Reisman Institute for Technology and Society — https://srinstitute.utoronto.ca/
Jürgen Schmidhuber’s Home Page — https://people.idsia.ch/~juergen/
Ryan Greenblatt's LessWrong comment about a future scenario where there's a one-time renegotiation of power, and heat from superintelligent AI projects causes the oceans to boil: https://www.lesswrong.com/posts/pZhEQieM9otKXhxmd/gradual-disempowerment-systemic-existential-risks-from?commentId=T7KZGGqq2Z4gXZsty

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
AI 2027, a bombshell new paper by the AI Futures Project, is a highly plausible scenario of the next few years of AI progress. I like this paper so much that I made a whole episode about it.

00:00 Overview of AI 2027
05:13 2025: Stumbling Agents
16:23 2026: Advanced Agents
21:49 2027: The Intelligence Explosion
29:13 AI's Initial Exploits and OpenBrain's Secrecy
30:41 Agent-3 and the Rise of Superhuman Engineering
37:05 The Creation and Deception of Agent-5
44:56 The Race Scenario: Humanity's Downfall
48:58 The Slowdown Scenario: A Glimmer of Hope
53:49 Final Thoughts

Show Notes
The website: https://ai-2027.com
Scott Alexander’s blog: https://astralcodexten.com
Daniel Kokotajlo’s previous predictions from 2021 about 2026: https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-looks-like

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Dr. Peter Berezin is the Chief Global Strategist and Director of Research at BCA Research, the largest Canadian investment research firm. He’s known for his macroeconomics research reports and his frequent appearances on Bloomberg and CNBC.

Notably, Peter is one of the only macroeconomists in the world who’s forecasting AI doom! He recently published a research report estimating a “more than 50/50 chance AI will wipe out all of humanity by the middle of the century”.

00:00 Introducing Peter Berezin
01:59 Peter’s Economic Predictions and Track Record
05:50 Investment Strategies and Beating the Market
17:47 The Future of Human Employment
26:40 Existential Risks and the Doomsday Argument
34:13 What’s Your P(Doom)™
39:18 Probability of non-AI Doom
44:19 Solving Population Decline
50:53 Constraining AI Development
53:40 The Multiverse and Its Implications
01:01:11 Are Other Economists Crazy?
01:09:19 Mathematical Universe and Multiverse Theories
01:19:43 Epistemic vs. Physical Probability
01:33:19 Reality Fluid
01:39:11 AI and Moral Realism
01:54:18 The Simulation Hypothesis and God
02:10:06 Liron’s Post-Show

Show Notes
Peter’s Twitter: https://x.com/PeterBerezinBCA
Peter’s old blog — https://stockcoach.blogspot.com
Peter’s 2021 BCA Research Report: “Life, Death and Finance in the Cosmic Multiverse” — https://www.bcaresearch.com/public/content/GIS_SR_2021_12_21.pdf
M.C. Escher’s “Circle Limit IV” — https://www.escherinhetpaleis.nl/escher-today/circle-limit-iv-heaven-and-hell/
Zvi Mowshowitz’s Blog (Liron’s recommendation for best AI news & analysis) — https://thezvi.substack.com
My Doom Debates episode about why nuclear proliferation is bad — https://www.youtube.com/watch?v=ueB9iRQsvQ8
Robin Hanson’s “Mangled Worlds” paper — https://mason.gmu.edu/~rhanson/mangledworlds.html
Uncontrollable by Darren McKee (Liron’s recommended AI x-risk book) — https://www.amazon.com/dp/B0CNNYKVH1

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Nathan Labenz, host of The Cognitive Revolution, joins me for an AI news & social media roundup!

00:00 Introducing Nate
05:18 What’s Your P(Doom)™
23:22 GPT-4o Image Generation
40:20 Will Fiverr’s Stock Crash?
47:41 AI Unemployment
55:11 Entrepreneurship
01:00:40 OpenAI Valuation
01:09:29 Connor Leahy’s Hair
01:13:28 Mass Extinction
01:25:30 Is anyone feeling the doom vibes?
01:38:20 Rethinking AI Individuality
01:40:35 “Softmax” — Emmett Shear's New AI Safety Org
01:57:04 Anthropic's Mechanistic Interpretability Paper
02:10:11 International Cooperation for AI Safety
02:18:43 Final Thoughts

Show Notes
Nate’s Twitter: https://x.com/labenz
Nate’s podcast: https://cognitiverevolution.ai and https://youtube.com/@CognitiveRevolutionPodcast
Nate’s company: https://waymark.com/

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
In this special cross-posted episode of Doom Debates, originally posted on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mitigate AI Risks
39:31 The Global AI Race and Game Theory
43:06 Philosophical Reflections on a Good Life
45:21 Final Thoughts

Show Notes
The Human Podcast with Joe Murray: https://www.youtube.com/@thehumanpodcastofficial

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Don’t miss the other great AI doom show, For Humanity: https://youtube.com/@ForHumanityAIRisk
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.

Alex has a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business, has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.

This debate was recorded in August 2023.

00:00 Intro and Alex’s Background
05:29 Alex's Views on AI and Technology
06:45 Alex’s Non-Doomer Position
11:20 Goal-to-Action Mapping
15:20 Outcome Pump Thought Experiment
21:07 Liron’s Doom Argument
29:10 The Dangers of Goal-to-Action Mappers
34:39 The China Argument and Existential Risks
45:18 Ideological Turing Test
48:38 Final Thoughts

Show Notes
Alexander Campbell’s Twitter: https://x.com/abcampbell

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.

His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity’s course without AI has an even higher near-term existential risk

While my own view is very different, I’m interested to learn more about Roko’s views and nail down our cruxes of disagreement.

00:00 Introducing Roko
03:33 Realizing that AI is the only thing that matters
06:51 Cyc: AI with “common sense”
15:15 Is alignment easy?
21:19 What’s Your P(Doom)™
25:14 Why civilization is doomed anyway
37:07 Roko’s AI nightmare scenario
47:00 AI risk mitigation
52:07 Market Incentives and AI Safety
57:13 Are RL and GANs good enough for superalignment?
01:00:54 If humans learned to be honest, why can’t AIs?
01:10:29 Is our test environment sufficiently similar to production?
01:23:56 AGI Timelines
01:26:35 Headroom above human intelligence
01:42:22 Roko’s Basilisk
01:54:01 Post-Debate Monologue

Show Notes
Roko’s Twitter: https://x.com/RokoMijic
Explanation of Roko’s Basilisk on LessWrong: https://www.lesswrong.com/w/rokos-basilisk

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.

His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it’s totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before recoiling from how nonsensical it looks.

00:00 Episode Highlights
01:29 Introduction to Roger Penrose
11:56 Uncomputability
16:52 Penrose on Gödel's Incompleteness Theorem
19:57 Liron Explains Gödel's Incompleteness Theorem
27:05 Why Penrose Gets Gödel Wrong
40:53 Scott Aaronson's Gödel CAPTCHA
46:28 Penrose's Critique of the Turing Test
48:01 Searle's Chinese Room Argument
52:07 Penrose's Views on AI and Consciousness
57:47 AI's Computational Power vs. Human Intelligence
01:21:08 Penrose's Perspective on AI Risk
01:22:20 Consciousness = Quantum Wave Function Collapse?
01:26:25 Final Thoughts

Show Notes
Source video — Feb 22, 2025 Interview with Roger Penrose on “This Is World” — https://www.youtube.com/watch?v=biUfMZ2dts8
Scott Aaronson’s “Gödel CAPTCHA” — https://www.scottaaronson.com/writings/captcha.html
My recent Scott Aaronson episode — https://www.youtube.com/watch?v=xsGqWeqKjEg
My explanation of what’s wrong with arguing “by definition” — https://www.youtube.com/watch?v=ueam4fq8k8I

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.

This episode has two parts:

In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.

In Part II (60 minutes), I explain the paper myself.

00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Shapiro's Analysis
19:19 Reproducing the Experiment
35:50 David's Definition of Coherence
37:14 Does AI have “Temporal Urgency”?
40:32 Universal Values and AI Alignment
49:13 PART II: EXPLAINING THE PAPER
51:37 How The Experiment Works
01:11:33 Instrumental Values and Coherence in AI
01:13:04 Exchange Rates and AI Biases
01:17:10 Temporal Discounting in AI Models
01:19:55 Power Seeking, Fitness Maximization, and Corrigibility
01:20:20 Utility Control and Bias Mitigation
01:21:17 Implicit Association Test
01:28:01 Emailing with the Paper’s Authors
01:43:23 My Takeaway

Show Notes
David’s source video: https://www.youtube.com/watch?v=XGu6ejtRz-0
The research paper: http://emergent-values.ai

Watch the Lethal Intelligence Guide, the ultimate introduction to AI x-risk! https://www.youtube.com/@lethal-intelligence
PauseAI, the volunteer organization I’m part of: https://pauseai.info
Join the PauseAI Discord — https://discord.gg/2XXWXvErfA — and say hi to me in the #doom-debates-podcast channel!
Doom Debates’ Mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.
Support the mission by subscribing to my Substack at https://doomdebates.com and to https://youtube.com/@DoomDebates

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com