#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more
Update: 2025-03-02
Description
A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen over the last year!
We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!