Dario Amodei (Anthropic CEO) - Scaling, Alignment, & AI Progress

Update: 2023-08-08
Description

Here is my conversation with Dario Amodei, CEO of Anthropic.

Dario is hilarious and has fascinating takes on what these models are doing, why they scale so well, and what it will take to align them.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

Timestamps

(00:00:00) - Introduction

(00:01:00) - Scaling

(00:15:46) - Language

(00:22:58) - Economic Usefulness

(00:38:05) - Bioterrorism

(00:43:35) - Cybersecurity

(00:47:19) - Alignment & mechanistic interpretability

(00:57:43) - Does alignment research require scale?

(01:05:30) - Misuse vs misalignment

(01:09:06) - What if AI goes well?

(01:11:05) - China

(01:15:11) - How to think about alignment

(01:31:31) - Is modern security good enough?

(01:36:09) - Inefficiencies in training

(01:45:53) - Anthropic’s Long Term Benefit Trust

(01:51:18) - Is Claude conscious?

(01:56:14) - Keeping a low profile



Get full access to Dwarkesh Podcast at www.dwarkeshpatel.com/subscribe