The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Are Emergent Behaviors in LLMs an Illusion? with Sanmi Koyejo - #671

Update: 2024-02-12

Description

Today we’re joined by Sanmi Koyejo, assistant professor at Stanford University, to continue our NeurIPS 2023 series. In our conversation, Sanmi discusses his two recent award-winning papers. First, we dive into “Are Emergent Abilities of Large Language Models a Mirage?”. We discuss the different ways LLMs are evaluated and the excitement surrounding their “emergent abilities,” such as the ability to perform arithmetic. Sanmi describes how evaluating model performance with nonlinear metrics can create the illusion that a model is rapidly gaining new capabilities, whereas linear metrics show the expected smooth improvement, casting doubt on the significance of emergence. We then turn to his second paper, “DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models,” and discuss the methodology it describes for evaluating concerns such as the toxicity, privacy, fairness, and robustness of LLMs.
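The metric effect discussed above can be sketched in a few lines of code. This is an illustrative toy, not code from the paper: it assumes a hypothetical model family whose per-token accuracy (a smooth, roughly linear metric) improves gradually with scale, and shows how a nonlinear metric like exact-match over a multi-token answer can make the same underlying improvement look like a sudden "emergent" jump.

```python
import math

# Hypothetical scaling curve (an assumption for illustration): per-token
# accuracy rises smoothly from 0.5 toward 1.0 as log model size grows.
def per_token_accuracy(log_size: float) -> float:
    return 1.0 - 0.5 * math.exp(-0.5 * log_size)

SEQ_LEN = 10  # suppose a correct arithmetic answer spans 10 tokens

for log_size in range(0, 11, 2):
    p = per_token_accuracy(log_size)
    # Nonlinear metric: exact match requires ALL tokens correct, so its
    # value is p ** SEQ_LEN -- near zero for small models, then a sharp rise.
    exact_match = p ** SEQ_LEN
    print(f"log size {log_size:2d}: per-token {p:.3f}  exact-match {exact_match:.3f}")
```

Under the smooth metric the model improves steadily at every scale; under the exact-match metric the same model appears to do nothing until it suddenly "acquires" arithmetic, which is the mirage the paper describes.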


The complete show notes for this episode can be found at twimlai.com/go/671.



Sam Charrington