LessWrong posts by zvi
442 Episodes
Anthropic announced a first step on model deprecation and preservation, promising to retain the weights of all models seeing significant use, including internal use, for at least the lifetime of Anthropic as a company.
They also will be doing a post-deployment report, including an interview with the model, when deprecating models going forward, and are exploring additional options, including the ability to preserve model access once the costs and complexity of doing so have been reduced.
These are excellent first steps, steps beyond anything I’ve seen at other AI labs, and I applaud them for doing it. There remains much more to be done, especially in finding practical ways of preserving some form of access to prior models.
To some, these actions are only a small fraction of what must be done, and this was an opportunity to demand more, sometimes far more. In some cases I think they go too far. Even where the requests are worthwhile (and I don’t always think they are), one must be careful to not de facto punish Anthropic for doing a good thing and create perverse incentives.
To others, these actions by Anthropic are utterly ludicrous and deserving of [...]

Outline:
(01:31) What Anthropic Is Doing
(09:54) Releasing The Weights Is Not A Viable Option
(11:35) Providing Reliable Inference Can Be Surprisingly Expensive
(14:22) The Interviews Are Influenced Heavily By Context
(19:58) Others Don't Understand And Think This Is All Deeply Silly

---
First published:
November 5th, 2025
Source:
https://www.lesswrong.com/posts/dB2iFhLY7mKKGB8Se/anthropic-commits-to-model-weight-preservation
---
Narrated by TYPE III AUDIO.
New Things Have Come To Light
The Information offers us new information about what happened when the board of OpenAI unsuccessfully tried to fire Sam Altman, which I call The Battle of the Board.
The Information: OpenAI co-founder Ilya Sutskever shared new details on the internal conflicts that led to Sam Altman's initial firing, including a memo alleging Altman exhibited a “consistent pattern of lying.”
Liv: Lots of people dismiss Sam's behaviour as typical for a CEO but I really think we can and should demand better of the guy who thinks he's building the machine god.
Toucan: From Ilya's deposition—
• Ilya plotted over a year with Mira to remove Sam
• Dario wanted Greg fired and himself in charge of all research
• Mira told Ilya that Sam pitted her against Daniela
• Ilya wrote a 52 page memo to get Sam fired and a separate doc on Greg
This Really Was Primarily A Lying And Management Problem
Daniel Eth: A lot of the OpenAI boardroom drama has been blamed on EA – but looks like it really was overwhelmingly an Ilya & Mira led effort, with EA playing a minor role and somehow winding up [...]

Outline:
(00:12) New Things Have Come To Light
(01:09) This Really Was Primarily A Lying And Management Problem
(03:23) Ilya Tells Us How It Went Down And Why He Tried To Do It
(06:17) If You Come At The King
(07:31) Enter The Scapegoats
(08:13) And In Summary

---
First published:
November 4th, 2025
Source:
https://www.lesswrong.com/posts/iRBhXJSNkDeohm69d/openai-the-battle-of-the-board-ilya-s-testimony
---
Narrated by TYPE III AUDIO.
It's been a long time coming that I spin off Crime into its own roundup series.
This is only about Ordinary Decent Crime. High crimes are not covered here.
Table of Contents
Perception Versus Reality.
The Case Violent Crime is Up Actually.
Threats of Punishment.
Property Crime Enforcement is Broken.
The Problem of Disorder.
Extreme Speeding as Disorder.
Enforcement and the Lack Thereof.
Talking Under The Streetlamp.
The Fall of Extralegal and Illegible Enforcement.
In America You Can Usually Just Keep Their Money.
Police.
Probation.
Genetic Databases.
Marijuana.
The Economics of Fentanyl.
Jails.
Criminals.
Causes of Crime.
Causes of Violence.
Homelessness.
Yay Trivial Inconveniences.
San Francisco.
Closing Down San Francisco.
A San Francisco Dispute.
Cleaning Up San Francisco.
Portland.
Those Who Do Not Help Themselves.
Solving for the Equilibrium (1).
Solving for the Equilibrium (2).
Lead.
Law & Order.
Look Out.
Perception Versus Reality
A lot of the impact of crime is based on the perception of crime.
The [...]

Outline:
(00:20) Perception Versus Reality
(05:00) The Case Violent Crime is Up Actually
(06:10) Threats of Punishment
(07:03) Property Crime Enforcement is Broken
(12:13) The Problem of Disorder
(14:39) Extreme Speeding as Disorder
(15:57) Enforcement and the Lack Thereof
(20:24) Talking Under The Streetlamp
(23:54) The Fall of Extralegal and Illegible Enforcement
(25:18) In America You Can Usually Just Keep Their Money
(27:29) Police
(37:31) Probation
(40:55) Genetic Databases
(43:04) Marijuana
(48:28) The Economics of Fentanyl
(50:59) Jails
(55:03) Criminals
(55:39) Causes of Crime
(56:16) Causes of Violence
(57:35) Homelessness
(58:27) Yay Trivial Inconveniences
(59:08) San Francisco
(01:04:07) Closing Down San Francisco
(01:05:30) A San Francisco Dispute
(01:09:13) Cleaning Up San Francisco
(01:13:05) Portland
(01:13:15) Those Who Do Not Help Themselves
(01:15:15) Solving for the Equilibrium (1)
(01:20:15) Solving for the Equilibrium (2)
(01:20:43) Lead
(01:22:18) Law & Order
(01:22:58) Look Out

---
First published:
November 3rd, 2025
Source:
https://www.lesswrong.com/posts/tt9JKubsa8jsCsfD5/crime-and-punishment-1-1
---
Narrated by TYPE III AUDIO.
---
Images from the article: Apple Podcasts and Spotify do not show images in the episode description. Try Pocket Casts, or another podcast app.
OpenAI is now set to become a Public Benefit Corporation, with its investors entitled to uncapped profit shares. Its nonprofit foundation will retain some measure of control and a 26% financial stake, in sharp contrast to its previous stronger control and much, much larger effective financial stake. The value transfer is in the hundreds of billions, thus potentially the largest theft in human history.
I say potentially largest because I realized one could argue that the events surrounding the dissolution of the USSR involved a larger theft. Unless you really want to stretch the definition of what counts, this seems to be in the top two.
I am in no way surprised by OpenAI moving forward on this, but I am deeply disgusted and disappointed they are being allowed (for now) to do so, including this statement of no action by Delaware and this Memorandum of Understanding with California.
Many media and public sources are calling this a win for the nonprofit, such as this from the San Francisco Chronicle. This is mostly them being fooled. They’re anchoring on OpenAI's previous plan to far more fully sideline the nonprofit. This is indeed a big win for [...]

Outline:
(01:38) OpenAI Calls It Completing Their Recapitalization
(03:05) How Much Was Stolen?
(07:02) The Nonprofit Still Has Lots of Equity After The Theft
(10:41) The Theft Was Unnecessary For Further Fundraising
(11:45) How Much Control Will The Nonprofit Retain?
(23:13) Will These Control Rights Survive And Do Anything?
(26:17) What About OpenAI's Deal With Microsoft?
(31:10) What Will OpenAI's Nonprofit Do Now?
(36:33) Is The Deal Done?

---
First published:
October 31st, 2025
Source:
https://www.lesswrong.com/posts/wCc7XDbD8LdaHwbYg/openai-moves-to-complete-potentially-the-largest-theft-in
---
Narrated by TYPE III AUDIO.
Sometimes the best you can do is try to avoid things getting even worse even faster.
Thus, one has to write articles such as ‘Please Do Not Sell B30A Chips to China.’
It's rather crazy to think that one would have to say this out loud.
In the same way, it seems not only do we need to say out loud to Not Build Superintelligence Right Now, there are those who say how dare you issue such a statement without knowing how to do so safely, so instead we should build superintelligence without knowing how to do so safely. The alternative is to risk societal dynamics we do not know how to control and that could have big unintended consequences, you say? Yes, well.
One good thing to come out of that was that Sriram Krishnan asked (some of) the right questions, giving us the opportunity to try and answer.
I also provided updates on AI Craziness Mitigation Efforts from OpenAI and Anthropic. We can all do better here.
Tomorrow, I’ll go over OpenAI's ‘recapitalization’ and reorganization, also known as one of the greatest thefts in human history. Compared to what we feared, it looks like we did relatively well [...]

Outline:
(02:01) Language Models Offer Mundane Utility
(06:37) Language Models Don't Offer Mundane Utility
(12:18) Huh, Upgrades
(14:55) On Your Marks
(16:40) Choose Your Fighter
(22:57) Get My Agent On The Line
(24:00) Deepfaketown and Botpocalypse Soon
(25:10) Fun With Media Generation
(28:42) Copyright Confrontation
(28:56) They Took Our Jobs
(30:50) Get Involved
(30:55) Introducing
(32:22) My Name is Neo
(34:38) In Other AI News
(36:50) Show Me the Money
(39:36) One Trillion Dollars For My Robot Army
(42:43) One Million TPUs
(45:57) Anthropic's Next Move
(46:55) Quiet Speculations
(53:26) The Quest for Sane Regulations
(58:54) The March of California Regulations
(01:06:49) Not So Super PAC
(01:08:32) Chip City
(01:12:52) The Week in Audio
(01:13:12) Do Not Take The Bait
(01:14:43) Rhetorical Innovation
(01:17:02) People Do Not Like AI
(01:18:08) Aligning a Smarter Than Human Intelligence is Difficult
(01:21:28) Misaligned!
(01:23:29) Anthropic Reports Claude Can Introspect
(01:30:59) Anthropic Reports On Sabotage Risks
(01:34:49) People Are Worried About AI Killing Everyone
(01:35:24) Other People Are Not As Worried About AI Killing Everyone
(01:38:08) The Lighter Side

---
First published:
October 30th, 2025
Source:
https://www.lesswrong.com/posts/TwbA3zTr99eh2kgCf/ai-140-trying-to-hold-the-line
---
Narrated by TYPE III AUDIO.
The Chinese and Americans are currently negotiating a trade deal. There are plenty of ways to generate a win-win deal, and early signs of this are promising on many fronts.
Since this will be discussed for real tomorrow as per reports, I will offer my thoughts on this one more time.
The biggest mistake America could make would be to effectively give up Taiwan, which would be catastrophic on many levels including that Taiwan contains TSMC. I am assuming we are not so foolish as to seriously consider doing this, still I note it.
Beyond that, the key thing, basically the only thing, America has to do other than ‘get a reasonable deal overall’ is not be so captured or foolish or both as to allow export of the B30A chip, or even worse than that (yes it can always get worse) allow relaxation of restrictions on semiconductor manufacturing imports.
At first I hadn’t heard signs about this. But now it looks like the nightmare of handing China compute parity on a silver platter is very much in play.
I disagreed with the decision to sell the Nvidia H20 chips to China, but that [...]

Outline:
(02:01) What It Would Mean To Sell The B30A
(07:22) A Note On The 'Tech Stack'
(08:32) A Note On Trade Imbalances
(09:06) What If They Don't Want The Chips?
(10:15) Nvidia Is Going Great Anyway Thank You
(11:20) Oh Yeah That Other Thing

---
First published:
October 29th, 2025
Source:
https://www.lesswrong.com/posts/ijYpLexfhHyhM2HBC/please-do-not-sell-b30a-chips-to-china
---
Narrated by TYPE III AUDIO.
AI chatbots in general, and OpenAI and ChatGPT and especially GPT-4o the absurd sycophant in particular, have long had a problem with issues around mental health.
I covered various related issues last month.
This post is an opportunity to collect links to previous coverage in the first section, and go into the weeds on some new events in the later sections. A lot of you should likely skip most of the in-the-weeds discussions.
What Are The Problems
There are a few distinct phenomena we have reason to worry about:
Several things that we group together under the (somewhat misleading) title ‘AI psychosis,’ ranging from reinforcing crank ideas or making people think they’re always right in relationship fights to causing actual psychotic breaks.
Thebes referred to this as three problem modes: The LLM as a social relation that draws you into madness, as an object relation [...]

Outline:
(00:36) What Are The Problems
(03:06) This Week In Crazy
(05:05) OpenAI Updates Its Model Spec
(09:00) Detection Rates
(11:08) Anthropic Says Thanks For The Memories
(12:32) Boundary Violations
(18:41) A Note On Claude Prompt Injections
(20:17) Conclusion

---
First published:
October 28th, 2025
Source:
https://www.lesswrong.com/posts/vrjM8qLKbiAYKAHTa/ai-craziness-mitigation-efforts
---
Narrated by TYPE III AUDIO.
Consider this largely a follow-up to Friday's post about a statement aimed at creating common knowledge around it being unwise to build superintelligence any time soon.
Mainly, there was a great question asked, so I took a few-hour shot at writing out my answer. I then close with a few other follow-ups on issues related to the statement.
A Great Question To Disentangle
There are some confusing wires potentially crossed here but the intent is great.
Scott Alexander: I think removing a 10% chance of humanity going permanently extinct is worth another 25-50 years of having to deal with the normal human problems the normal way.
Sriram Krishnan: Scott what are verifiable empirical things ( model capabilities / incidents / etc ) that would make you shift that probability up or down over next 18 months?
I went through three steps interpreting [...]

Outline:
(00:30) A Great Question To Disentangle
(02:20) Scott Alexander Gives a Fast Answer
(04:53) Question 1: What events would most shift your p(doom | ASI) in the next 18 months?
(13:01) Question 1a: What would get this risk down to acceptable levels?
(14:48) Question 2: What would shift the amount that stopping us from creating superintelligence for a potentially extended period would reduce p(doom)?
(17:01) Question 3: What would shift your timelines to ASI (or to sufficiently advanced AI, or 'to crazy')?
(20:01) Bonus Question 1: Why Do We Keep Having To Point Out That Building Superintelligence At The First Possible Moment Is Not A Good Idea?
(22:44) Bonus Question 2: What Would a Treaty On Prevention of Artificial Superintelligence Look Like?

---
First published:
October 27th, 2025
Source:
https://www.lesswrong.com/posts/mXtYM3yTzdsnFq3MA/asking-some-of-the-right-questions
---
Narrated by TYPE III AUDIO.
Building superintelligence poses large existential risks. Also known as: If Anyone Builds It, Everyone Dies. Where ‘it’ is superintelligence, and ‘dies’ is that probably everyone on the planet literally dies.
We should not build superintelligence until such time as that changes, and the risk of everyone dying as a result, as well as the risk of losing control over the future as a result, is very low. Not zero, but far lower than it is now or will be soon.
Thus, the Statement on Superintelligence from FLI, which I have signed.
Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties [...]

Outline:
(02:02) A Brief History Of Prior Statements
(03:51) This Third Statement
(05:08) Who Signed It
(07:27) Pushback Against the Statement
(09:05) Responses To The Pushback
(12:32) Avoid Negative Polarization But Speak The Truth As You See It

---
First published:
October 24th, 2025
Source:
https://www.lesswrong.com/posts/QzY6ucxy8Aki2wJtF/new-statement-calls-for-not-building-superintelligence-for
---
Narrated by TYPE III AUDIO.
The big release this week was OpenAI giving us a new browser, called Atlas.
The idea of Atlas is that it is Chrome, except with ChatGPT integrated throughout to let you enter agent mode and chat with web pages and edit or autocomplete text, and that will watch everything you do and take notes to be more useful to you later.
From the consumer standpoint, does the above sound like a good trade to you? A safe place to put your trust? How about if it also involves (at least for now) giving up many existing Chrome features?
From OpenAI's perspective, a lot of that could have been done via a Chrome extension, but by making a browser some things get easier, and more importantly OpenAI gets to go after browser market share and avoid dependence on Google.
I’m going to stick with using Claude [...]

Outline:
(02:01) Language Models Offer Mundane Utility
(03:07) Language Models Don't Offer Mundane Utility
(04:52) Huh, Upgrades
(05:24) On Your Marks
(10:15) Language Barrier
(12:50) From ChatGPT, a Chinese answer to the question about which qualities children should have:
(13:30) ChatGPT in English on the same question:
(15:17) Choose Your Fighter
(17:19) Get My Agent On The Line
(18:54) Fun With Media Generation
(23:09) Copyright Confrontation
(25:19) You Drive Me Crazy
(35:31) They Took Our Jobs
(44:06) A Young Lady's Illustrated Primer
(44:42) Get Involved
(45:33) Introducing
(47:15) In Other AI News
(48:30) Show Me the Money
(51:03) So You've Decided To Become Evil
(53:18) Quiet Speculations
(56:11) People Really Do Not Like AI
(57:18) The Quest for Sane Regulations
(01:00:55) Alex Bores Launches Campaign For Congress
(01:03:33) Chip City
(01:10:17) The Week in Audio
(01:13:00) Rhetorical Innovation
(01:16:17) Don't Take The Bait
(01:27:29) Do You Feel In Charge?
(01:29:30) Tis The Season Of Evil
(01:34:45) People Are Worried About AI Killing Everyone
(01:36:00) The Lighter Side

---
First published:
October 23rd, 2025
Source:
https://www.lesswrong.com/posts/qC3M3x2FwiG2Qm7Jj/ai-139-the-overreach-machines
---
Narrated by TYPE III AUDIO.
Some podcasts are self-recommending on the ‘yep, I’m going to be breaking this one down’ level. This was very clearly one of those. So here we go.
As usual for podcast posts, the baseline bullet points describe key points made, and then the nested statements are my commentary.
If I am quoting directly I use quote marks, otherwise assume paraphrases.
Rather than worry about timestamps, I’ll use YouTube's section titles, as it's not that hard to find things via the transcript as needed.
This was a fun one in many places, interesting throughout, frustrating in similar places to where other recent Dwarkesh interviews have been frustrating. It gave me a lot of ideas, some of which might even be good.
AGI Is Still a Decade Away
Andrej calls this the ‘decade of agents’ contrary to (among [...]

Outline:
(00:58) AGI Is Still a Decade Away
(12:13) LLM Cognitive Deficits
(15:31) RL Is Terrible
(17:24) How Do Humans Learn?
(24:17) AGI Will Blend Into 2% GDP Growth
(29:28) ASI
(37:38) Evolution of Intelligence and Culture
(38:57) Why Self Driving Took So Long
(42:06) Future Of Education
(48:27) Reactions

---
First published:
October 21st, 2025
Source:
https://www.lesswrong.com/posts/ZBoJaebKFEzxuhNGZ/on-dwarkesh-patel-s-podcast-with-andrej-karpathy
---
Narrated by TYPE III AUDIO.
We have the classic phenomenon where suddenly everyone decided it is good for your social status to say we are in an ‘AI bubble.’
Are these people short the market? Do not be silly. The conventional wisdom response to that question these days is that, as was said in 2007, ‘if the music is playing you have to keep dancing.’
So even with lots of people newly thinking there is a bubble the market has not moved down, other than (modestly) on actual news items, usually related to another potential round of tariffs, or that one time we had a false alarm during the DeepSeek Moment.
So, what's the case we’re in a bubble? What's the case we’re not?
My Answer In Brief
People get confused about bubbles, often applying that label any time prices fall. So you have to be clear on what [...]

Outline:
(01:17) My Answer In Brief
(02:18) Time Sensitive Point of Order: Alex Bores Launches Campaign For Congress, If You Care About AI Existential Risk Consider Donating
(04:09) So They're Saying There's a Bubble
(05:04) AI Is Atlas And People Worry It Might Shrug
(05:35) Can A Bubble Be Common Knowledge?
(08:33) Steamrollers, Picks and Shovels
(09:27) What Can Go Up Must Sometimes Go Down
(11:36) What Can Go Up Quite A Lot Can Go Even More Down
(13:17) Step Two Remains Important
(15:00) Oops We Might Do It Again
(15:49) Derek Thompson Breaks Down The Arguments
(17:47) AI Revenues Are Probably Going To Go Up A Lot
(20:19) True Costs That Matter Are Absolute Not Relative
(21:05) We Are Spending a Lot But Also Not a Lot
(22:46) Valuations Are High But Not Super High
(23:59) Official GPU Depreciation Schedules Seem Pretty Reasonable To Me
(29:14) The Bubble Case Seems Weak
(30:53) What It Would Mean If Prices Did Go Down

---
First published:
October 20th, 2025
Source:
https://www.lesswrong.com/posts/rkiBknhWh3D83Kdr3/bubble-bubble-toil-and-trouble
---
Narrated by TYPE III AUDIO.
As usual when things split, Part 1 is mostly about capabilities, and Part 2 is mostly about a mix of policy and alignment.
Table of Contents
The Quest for Sane Regulations. The GAIN Act and some state bills.
People Really Dislike AI. They would support radical, ill-advised steps.
Chip City. Are we taking care of business?
The Week in Audio. Hinton talks to Jon Stewart, Klein to Yudkowsky.
Rhetorical Innovation. How to lose the moral high ground.
Water Water Everywhere. AI has many big issues. Water isn’t one of them.
Read Jack Clark's Speech From The Curve. It was a sincere, excellent speech.
How One Other Person Responded To This Thoughtful Essay. Some aim to divide.
A Better Way To Disagree. Others aim to work together and make things better.
Voice Versus Exit. The age old [...]

Outline:
(00:20) The Quest for Sane Regulations
(05:56) People Really Dislike AI
(12:22) Chip City
(13:12) The Week in Audio
(13:24) Rhetorical Innovation
(20:53) Water Water Everywhere
(23:57) Read Jack Clark's Speech From The Curve
(28:26) How One Other Person Responded To This Thoughtful Essay
(38:43) A Better Way To Disagree
(59:39) Voice Versus Exit
(01:03:51) The Dose Makes The Poison
(01:06:44) Aligning a Smarter Than Human Intelligence is Difficult
(01:10:08) You Get What You Actually Trained For
(01:15:54) Messages From Janusworld
(01:18:37) People Are Worried About AI Killing Everyone
(01:22:40) The Lighter Side

The original text contained 1 footnote which was omitted from this narration.

---
First published:
October 17th, 2025
Source:
https://www.lesswrong.com/posts/gCuJ5DabY9oLNDs9B/ai-138-part-2-watch-out-for-documents
---
Narrated by TYPE III AUDIO.
Well, one person says ‘demand,’ another says ‘give the thumbs up to’ or ‘welcome our new overlords.’ Why quibble? Surely we’re all making way too big a deal out of this idea of OpenAI ‘treating adults like adults.’ Everything will be fine. Right?
Why not focus on all the other cool stuff happening? Claude Haiku 4.5 and Veo 3.1? Walmart joining ChatGPT instant checkout? Hey, come back.
Alas, the mass of things once again got out of hand this week, so we’re splitting the update into two parts.
Table of Contents
Earlier This Week. OpenAI does paranoid lawfare, China escalates bigly.
Language Models Offer Mundane Utility. Help do your taxes, of course.
Language Models Don’t Offer Mundane Utility. Beware the false positive.
Huh, Upgrades. Claude Haiku 4.5, Walmart on ChatGPT instant checkout.
We Patched The Torment Nexus, Turn It [...]

Outline:
(00:51) Earlier This Week
(02:25) Language Models Offer Mundane Utility
(05:44) Language Models Don't Offer Mundane Utility
(11:47) Huh, Upgrades
(14:55) We Patched The Torment Nexus, Turn It Back On
(27:49) On Your Marks
(34:33) Choose Your Fighter
(35:50) Deepfaketown and Botpocalypse Soon
(36:15) Fun With Media Generation
(40:32) Copyright Confrontation
(41:04) AIs Are Often Absurd Sycophants
(44:32) They Took Our Jobs
(54:04) Find Out If You Are Worried About AI Killing Everyone
(55:42) A Young Lady's Illustrated Primer
(01:02:59) AI Diffusion Prospects
(01:11:13) The Art of the Jailbreak
(01:14:38) Get Involved
(01:14:57) Introducing
(01:16:48) In Other AI News
(01:18:22) Show Me the Money
(01:20:45) Quiet Speculations

---
First published:
October 16th, 2025
Source:
https://www.lesswrong.com/posts/n4xxSKwwP3SYaRqBS/ai-138-part-1-the-people-demand-erotic-sycophants
---
Narrated by TYPE III AUDIO.
It is increasingly often strange compiling the monthly roundup, because life comes at us fast. I look at various things I’ve written, and it feels like they are from a different time. Remember that whole debate over free speech? Yeah, that was a few weeks ago. Many such cases. Gives one a chance to reflect.
In any case, here we go.
Table of Contents
Don’t Provide Bad Training Data.
Maybe Don’t Say Maybe.
Throwing Good Parties Means Throwing Parties.
Air Travel Gets Worse.
Bad News.
You Do Not Need To Constantly Acknowledge That There Is Bad News.
Prediction Market Madness.
No Reply Necessary.
While I Cannot Condone This.
Antisocial Media.
Government Working.
Tylenol Does Not Cause Autism.
Jones Act Watch.
For Science!.
Work Smart And Hard.
So Emotional.
[...]

Outline:
(00:37) Don't Provide Bad Training Data
(02:25) Maybe Don't Say Maybe
(03:00) Throwing Good Parties Means Throwing Parties
(05:06) Air Travel Gets Worse
(06:16) Bad News
(09:03) You Do Not Need To Constantly Acknowledge That There Is Bad News
(12:52) Prediction Market Madness
(16:00) No Reply Necessary
(19:49) While I Cannot Condone This
(23:42) Antisocial Media
(29:22) Government Working
(47:12) Tylenol Does Not Cause Autism
(51:12) Jones Act Watch
(52:13) For Science!
(54:29) Work Smart And Hard
(55:34) So Emotional
(59:17) Where Credit Is Due
(01:01:30) Good News, Everyone
(01:03:37) I Love New York
(01:08:14) For Your Entertainment
(01:17:14) Gamers Gonna Game Game Game Game Game
(01:19:53) I Was Promised Flying Self-Driving Cars
(01:24:33) Sports Go Sports
(01:29:06) Opportunity Knocks

---
First published:
October 15th, 2025
Source:
https://www.lesswrong.com/posts/XLbwizZeuhoHpdAvE/monthly-roundup-35-october-2025
---
Narrated by TYPE III AUDIO.
What is going on with, and what should we do about, the Chinese declaring extraterritorial exports controls on rare earth metals, which threaten to go way beyond semiconductors and also beyond rare earths into things like lithium and also antitrust investigations?
China also took other actions well beyond only rare Earths, including going after Qualcomm, lithium and everything else that seemed like it might hurt, as if they are confident that a cornered Trump will fold and they believe they have escalation dominance and are willing to use it.
China now has issued reassurances that it will allow all civilian uses of rare earths and not to worry, but it seems obvious that America cannot accept a Chinese declaration of extraterritorial control over entire world supply chains, even if China swears it will only narrowly use that power. In response, Trump has threatened massive tariffs and cancelled [...]

Outline:
(01:20) Was This Provoked?
(02:43) What Is China Doing?
(03:37) How Is America Responding?
(05:58) How Is China Responding To America's Response?
(07:14) What To Make Of China's Attempted Reassurances?
(08:15) How Should We Respond From Here?
(09:37) It Looks Like China Overplayed Its Hand
(10:36) We Need To Mitigate China's Leverage Across The Board
(13:11) What About The Chip Export Controls?
(14:01) This May Be A Sign Of Weakness
(15:06) What Next?

---
First published:
October 14th, 2025
Source:
https://www.lesswrong.com/posts/rTP5oZ3CDJR429Dw7/trade-escalation-supply-chain-vulnerabilities-and-rare-earth
---
Narrated by TYPE III AUDIO.
A little over a month ago, I documented how OpenAI had descended into paranoia and bad faith lobbying surrounding California's SB 53.
This included sending a deeply bad faith letter to Governor Newsom, which sadly is par for the course at this point.
It also included lawfare attacks against bill advocates, including Nathan Calvin and others, using Elon Musk's unrelated lawsuits and vendetta against OpenAI as a pretext, accusing them of being in cahoots with Elon Musk.
Previous reporting of this did not reflect well on OpenAI, but it sounded like the demand was limited in scope to a supposed link with Elon Musk or Meta CEO Mark Zuckerberg, links which very clearly never existed.
Accusing essentially everyone who has ever done anything OpenAI dislikes of having united in a hallucinated ‘vast conspiracy’ is all classic behavior for OpenAI's Chief Global Affairs Officer Chris Lehane [...]

Outline:
(02:35) What OpenAI Tried To Do To Nathan Calvin
(07:22) It Doesn't Look Good
(10:17) OpenAI's Jason Kwon Responds
(19:14) A Brief Amateur Legal Analysis Of The Request
(21:33) What OpenAI Tried To Do To Tyler Johnston
(25:50) Nathan Compiles Responses to Kwon
(29:52) The First Thing We Do
(36:12) OpenAI Head of Mission Alignment Joshua Achiam Speaks Out
(40:16) It Could Be Worse
(41:31) Chris Lehane Is Who We Thought He Was
(42:50) A Matter of Distrust

---
First published:
October 13th, 2025
Source:
https://www.lesswrong.com/posts/txTKHL2dCqnC7QsEX/openai-15-more-on-openai-s-paranoid-lawfare-against
---
Narrated by TYPE III AUDIO.
The 2025 State of AI Report is out, with lots of fun slides and a full video presentation. They’ve been consistently solid, providing a kind of outside general view.
I’m skipping over stuff my regular readers already know that doesn’t bear repeating.
Qwen The Fine Tune King For Now
Nathan Benaich: Once a “Llama rip-off,” @Alibaba_Qwen now powers 40% of all new fine-tunes on @huggingface. China's open-weights ecosystem has overtaken Meta's, with Llama riding off into the sunset…for now.
I highlight this because the ‘for now’ is important to understand, and to note that it's Qwen not DeepSeek. As in, models come and models go, and especially in the open model world people will switch on you on a dime. Stop worrying about lock-ins and mystical ‘tech stacks.’
Rise Of The Machines
Robots now reason too. “Chain-of-Action” planning brings structured thought to the [...]

Outline:
(00:39) Qwen The Fine Tune King For Now
(01:19) Rise Of The Machines
(01:40) Model Context Protocol Wins Out
(02:06) Benchmarks Are Increasingly Not So Useful
(02:56) The DeepSeek Moment Was An Overreaction
(03:40) Capitalism Is Hard To Pin Down
(04:30) The Odds Are Against Us And The Situation Is Grim
(06:35) They Grade Last Year's Predictions
(09:35) Their Predictions for 2026

---
First published:
October 10th, 2025
Source:
https://www.lesswrong.com/posts/AvKjYYYHC93JzuFCM/2025-state-of-ai-report-and-predictions
---
Narrated by TYPE III AUDIO.
OpenAI is making deals and shipping products. They locked in their $500 billion valuation and then got 10% of AMD in exchange for buying a ton of chips. They gave us the ability to ‘chat with apps’ inside of ChatGPT. They walked back their insane Sora copyright and account deletion policies and are buying $50 million in marketing. They’ve really got a lot going on right now.
Of course, everyone else also has a lot going on right now. It's AI. I spent the last weekend at a great AI conference at Lighthaven called The Curve.
The other big news that came out this morning is that China is asserting sweeping extraterritorial control over rare earth metals. This is likely China's biggest card short of full trade war or worse, and it is being played in a hugely escalatory way that America obviously can’t accept. Presumably this [...] ---Outline:(01:33) Language Models Offer Mundane Utility(03:39) Language Models Don't Offer Mundane Utility(05:13) Huh, Upgrades(08:21) Chat With Apps(11:32) On Your Marks(12:51) Choose Your Fighter(13:47) Fun With Media Generation(24:03) Deepfaketown and Botpocalypse Soon(27:46) You Drive Me Crazy(31:33) They Took Our Jobs(34:35) The Art of the Jailbreak(34:55) Get Involved(38:31) Introducing(42:18) In Other AI News(43:38) Get To Work(45:45) Show Me the Money(50:24) Quiet Speculations(52:15) The Quest for Sane Regulations(01:02:31) Chip City(01:06:53) The Race to Maximize Rope Market Share(01:08:49) The Week in Audio(01:09:28) Rhetorical Innovation(01:10:13) Paranoia Paranoia Everybody's Coming To Test Me(01:13:10) Aligning a Smarter Than Human Intelligence is Difficult(01:14:12) Free Petri Dish(01:15:45) Unhobbling The Unhobbling Department(01:17:46) Serious People Are Worried About Synthetic Bio Risks(01:19:09) Messages From Janusworld(01:28:54) People Are Worried About AI Killing Everyone(01:32:14) Other People Are Excited About AI Killing Everyone(01:40:48) So You've Decided To Become Evil(01:45:25) The Lighter Side---
First published:
October 9th, 2025
Source:
https://www.lesswrong.com/posts/YfvqLidW5BGpiFrNF/ai-137-an-openai-app-for-that
---
Narrated by TYPE III AUDIO.
It's been about a year since the last one of these. Given the long cycle, I have done my best to check for changes, but things may have changed on any given topic by the time you read this.
The NEPA Problem
NEPA is a constant thorn in the side of anyone attempting to do anything.
A certain kind of person responds with: “Good.”
That kind of person does not want humans to do physical things in the world.
They like the world as it is, or as it used to be.
They do not want humans messing with it further.
They often also think humans are bad, and should stop existing entirely.
Or believe humans deserve to suffer or do penance.
Or do not trust people to make good decisions and safeguard what matters.
To them [...] ---Outline:(00:21) The NEPA Problem(01:53) The Full NEPA Solution(02:43) The Other Full NEPA Solution(03:59) Meanwhile(06:06) Yay Nuclear Power(14:22) Yay Solar and Wind Power(18:11) Yay Grid Storage(18:42) Yay Transmission Lines(19:40) American Energy is Cheap(20:48) Geoengineering(22:29) NEPA Standard Procedure is a Doom Loop(25:34) Categorical Exemptions(26:13) Teachers Against Transportation(32:41) Also CEQA(36:36) It Can Always Be Worse(39:35) How We Got Into This Mess(41:00) Categorical Exclusions(47:23) A Green Bargain(54:18) Costs Versus Benefits(55:27) A Call for Proposals(56:18) The Men of Two Studies(59:55) A Modest Proposal---
First published:
October 8th, 2025
Source:
https://www.lesswrong.com/posts/HqAJyxhdJcEfhH2nW/nepa-permitting-and-energy-roundup-2
---
Narrated by TYPE III AUDIO.