Learning Transformer Programs with Dan Friedman - #667

Update: 2024-01-15

Description

Today, we continue our NeurIPS series with Dan Friedman, a PhD student in the Princeton NLP group. In our conversation, we explore his research on mechanistic interpretability for transformer models, specifically his paper, Learning Transformer Programs. The paper proposes modifications to the transformer architecture that allow trained models to be easily converted into human-readable programs, making them inherently interpretable. We compare this approach with prior approaches to understanding these models and discuss the shortcomings of those earlier methods. We also dig into the approach’s limitations and constraints in terms of functionality and scale.


The complete show notes for this episode can be found at twimlai.com/go/667.

Sam Charrington