#18 Nathan Labenz on reinforcement learning, reasoning models, emergent misalignment & more

Update: 2025-03-02

Description

A lot has happened in AI since the last time I spoke to Nathan Labenz of The Cognitive Revolution, so I invited him back on for a whistle-stop tour of the most important developments we've seen over the last year!

We covered reasoning models, DeepSeek, the many spooky alignment failures we've observed in the last few months & much more!

Follow Nathan on Twitter

Listen to The Cognitive Revolution 

My Twitter & Substack 
