Training Data

Author: Sequoia Capital

Subscribed: 152 · Played: 2,222

Description

Join us as we train our neural nets on the theme of the century: AI. Sonya Huang, Pat Grady and more Sequoia Capital partners host conversations with leading AI builders and researchers to ask critical questions and develop a deeper understanding of the evolving technologies—and their implications for technology, business and society.


The content of this podcast does not constitute investment advice, an offer to provide investment advisory services, or an offer to sell or solicitation of an offer to buy an interest in any investment fund.

48 Episodes
CEO Amit Bendov shares how Gong evolved from a meeting transcription tool to an AI-powered revenue platform that's increasing sales capacity by up to 60%. He explains why task-specific AI agents are the key to enterprise adoption, and why human accountability will remain crucial even as AI takes over routine sales tasks. Amit also reveals how Gong survived recent market headwinds by expanding their product suite while maintaining their customer-first approach.
Hosted by Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
“New paradigm of AI architectures”: Yann LeCun’s talk at Davos about the world beyond transformers and LLMs in the next 3-5 years
The Beginning of Infinity: Book by David Deutsch that Amit says is “mind-changing”
At AI Ascent 2025, Jeff Dean makes bold predictions. Discover how the pioneer behind Google's TPUs and foundational AI research sees the technology evolving, from specialized hardware to more organic systems, and future engineering capabilities.
Recorded live at Sequoia’s AI Ascent 2025: LangChain CEO Harrison Chase introduces the concept of ambient agents, AI systems that operate continuously in the background responding to events rather than direct human prompts. Learn how these agents differ from traditional chatbots, why human oversight remains essential and how this approach could dramatically scale our ability to leverage AI.
Recorded live at Sequoia’s AI Ascent 2025: Sierra co-founder Bret Taylor discusses why AI is driving a fundamental shift from subscription-based pricing to outcomes-based models. Learn why this transition is harder for incumbents than startups, why applied AI and vertical specialization represent the biggest opportunities for entrepreneurs and how to position your AI company for success in this new paradigm.
Recorded live at Sequoia’s AI Ascent 2025: Sam Altman reflects on OpenAI’s evolution from a 14-person research lab to a dominant AI platform. He envisions transforming ChatGPT into a deeply personal AI service that remembers your entire life's context—from conversations to emails—while working seamlessly across all services. Sam describes the generation gap in how users engage with ChatGPT, and makes surprisingly specific predictions for the next 2-3 years of AI evolution.
Workday CEO and Sequoia partner Carl Eschenbach explains how the company is evolving its platform to handle both human and AI workers. He shares Workday’s three-pronged approach to monetizing AI through seat-based pricing, role-based agents and consumption-based API access. Eschenbach discusses why domain-specific agents with curated data will be more valuable than general-purpose models in the enterprise, and how Workday is helping enterprises navigate the transition to an AI-powered workplace while maintaining human connection. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
After pioneering reinforcement learning breakthroughs at DeepMind with Capture the Flag and AlphaStar, Max Jaderberg aims to revolutionize drug discovery with AI as Chief AI Officer of Isomorphic Labs, which was spun out of DeepMind. He discusses how AlphaFold 3's diffusion-based architecture enables unprecedented understanding of molecular interactions, and why we're approaching a "Move 37 moment" in AI-powered drug design where models will surpass human intuition. Max shares his vision for general AI models that can solve all diseases, and the importance of developing agents that can learn to search through the whole potential design space.
Hosted by Stephanie Zhan, Sequoia Capital
Mentioned in this episode:
Playing Atari with Deep Reinforcement Learning: Seminal 2013 paper on reinforcement learning
Capture the Flag: 2019 DeepMind paper on the emergence of cooperative agents
AlphaStar: 2019 DeepMind paper on attaining grandmaster level in StarCraft II using multi-agent RL
AlphaFold Server: Web interface for the AlphaFold 3 model for non-commercial academic use
Former Outreach CEO Manny Medina discusses his new company Paid, which provides billing, pricing and margin management tools for AI companies. He explains why traditional SaaS pricing models don’t work for AI businesses, and breaks down emerging approaches like outcome-based and agent-based pricing. Manny shares why he believes focused AI applications targeting specific workflows will win over broad platforms, while emphasizing that AI companies need better tools to understand their unit economics and capture more value.
Hosted by Pat Grady and Lauren Reeder, Sequoia Capital
Mentioned in this episode:
CPQ: Configure, Price, Quote
Invent and Wander: Book by Jeff Bezos and Walter Isaacson
Foundations of Statistical Natural Language Processing: 1999 book by Chris Manning and Hinrich Schütze that Manny cites as a piece of AI content every AI founder should read (still in print)
The fox and the hedgehog
Quandri
XBOW
HappyRobot
Owl Crosby
Patrick Hsu, co-founder of Arc Institute, discusses the opportunities for AI in biology beyond just drug development, and how Evo 2, their new biology foundation model, is enabling a broad ecosystem of applications. Evo 2 was trained on a vast dataset of genomic data to learn evolutionary patterns that would have taken years to find; as a result, the model can be used for applications from identifying mutations that cause disease to designing new molecular and even genome-scale biological systems.
Hosted by Josephine Chen and Pat Grady, Sequoia Capital
Mentioned in this episode:
Sequence modeling and design from molecular to genome scale with Evo: Public pre-print of the original Evo paper
Genome modeling and design across all domains of life with Evo 2: Public pre-print of the Evo 2 paper
ClinVar: NIH database of genes known to cause disease, and mutations in those genes causally associated with disease state
Sequence Read Archive: Massive NIH database of gene sequencing data
Machines of Loving Grace: Dario Amodei essay that Patrick cites on how AI could transform the world for the better
Arc Virtual Cell Atlas: Arc’s first step toward assembling, curating and generating large-scale cellular data for AI-driven biological discovery (among many other tools)
Protein Data Bank (PDB): A global archive of 3D structural information of biomolecules used by DeepMind to train AlphaFold
OpenAI Deep Research: The one AI app Patrick uses daily
Amjad Masad set out more than a decade ago to pursue the dream of unleashing 1B software creators around the world. With millions of Replit users pre-ChatGPT, that vision was already becoming a reality. Turbocharged by LLMs, the vision of enabling anyone to code—from 12-year-olds in India to knowledge workers in the U.S.—seems less and less radical. In this episode, Amjad explains how an explosion in the developer population could change the economy, society and more. He also discusses his early days programming in Jordan, his unique management approach and what AI will mean for the global economy.
Hosted by David Cahn and Sonya Huang, Sequoia Capital
Mentioned in this episode:
On the Naturalness of Software: 2012 paper on applying NLP to code
Attention Is All You Need: Seminal 2017 paper on transformers
I Am a Strange Loop: 2007 follow-up to Douglas Hofstadter’s 1979 classic Gödel, Escher, Bach that explores how self-referential systems can describe minds
On Lisp: Paul Graham’s 1993 book on the original programming language of AI
Christopher O’Donnell believes the fundamental problems with CRM—incomplete data, complex workflows, siloed work products and the fear of leads falling through the cracks—can finally be solved through AI. Founder of Day.ai and former Chief Product Officer of HubSpot, Christopher explains how his team is building a system that automatically captures the full context of customer relationships while giving users transparency and control. He shares lessons from building HubSpot’s CRM and why he’s taking a deliberate approach to product development despite the pressure to scale quickly in the AI era.
Hosted by Pat Grady, Sequoia Capital
Mentioned in this episode:
The Innovator's Dilemma: Classic book by Clay Christensen, referenced regarding HubSpot's second S-curve strategy
HubSpot CRM: The only product to successfully challenge Salesforce’s dominance in the CRM category
From Super Mario Brothers to Elden Ring: Analogy for what an AI-powered CRM experience can be, comparing video games launched in 1985 vs. 2022
Punk’d: Hidden camera–practical joke reality television series that premiered on MTV in 2003, created by Ashton Kutcher and Jason Goldberg
Slow is smooth and smooth is fast: SEALs-derived concept mentioned regarding product development
Aga stove: Highlighted as an extraordinary example of product design
Filip Kozera sees parallels between Excel’s democratization of data analytics and Wordware’s mission to put AI development in the hands of knowledge workers. Drawing inspiration from Excel’s 750 million users (compared to 30 million software developers), Wordware is creating tools that balance the rigid structure of programming with the fuzziness of natural language. Filip explains why effective AI development requires working across multiple abstraction layers—from high-level concepts to detailed implementation—while preserving human creative control. He shares his vision for “word artisans” who will use AI to amplify their creative impact.
Hosted by Sonya Huang, Sequoia Capital
Mentioned in this episode:
Lovable: Generative AI app that builds UIs and web apps
Her: 2013 Spike Jonze film that Filip uses as an example of why voice will not be the best modality for expressing knowledge work
Descript: AI video editing app that Filip uses a lot
Granola: AI notetaking app Filip uses every day
Gemini 2.0 Pro: Google’s newest long-context model, which can handle 6,000-page PDFs
Limitless pendant: Wearable device for collecting personal conversational context to drive AI experiences, which Filip can’t wait to see ship
DeepLearning.AI: Andrew Ng’s amazing resource for learning about AI
3Blue1Brown: Grant Sanderson’s incredible YouTube channel that explains math and AI visually
As VP of Google Labs, Josh Woodward leads teams exploring the frontiers of AI applications. He shares insights on their rapid development process, why today’s written prompts will become outdated and how AI is transforming everything from video generation to computer control. He reveals that 25% of Google’s code is now written by AI and explains why coding could see major leaps forward this year. He emphasizes the importance of taste, design and human values in building AI tools that will shape how future generations work and create.
Mentioned in this episode:
Notebook LM: Personal research product based on Gemini 2 (previously discussed on Training Data)
Veo 2: Google DeepMind’s new video generation model
Paul Graham on X replying to Aaron Levie’s post that “One approach to take in building in AI is to do something that's too expensive to be reasonably practical right now, and just bet that the costs will drop by 10X or 100X over time. The cost curve is on your side.”
Where Good Ideas Come From: Book on the history of innovation by Steven Johnson
Project Mariner: Google DeepMind’s research prototype exploring human-agent interaction, starting with browser use
Replit Agent: Josh’s favorite new AI app
The Lego Story: Book on the history of Lego
Hosted by: Ravi Gupta and Sonya Huang, Sequoia Capital
Harvey CEO Winston Weinberg explains why success in legal AI requires more than just model capabilities—it demands deep process expertise that doesn’t exist online. He shares how Harvey balances rapid product development with earning trust from law firms through hyper-personalized demos and deep industry expertise. The discussion covers Harvey’s approach to product development—expanding specialized capabilities then collapsing them into unified workflows—and why focusing on complex work like international mergers creates the most defensible position in legal AI. Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
OpenEvidence is transforming how doctors access medical knowledge at the point of care, from the biggest medical establishments to small practices serving rural communities. Founder Daniel Nadler explains his team’s insight that training smaller, specialized AI models on peer-reviewed literature outperforms large general models for medical applications. He discusses how making the platform freely available to all physicians led to widespread organic adoption and strategic partnerships with publishers like the New England Journal of Medicine. In an industry where organizations move glacially, 10-20% of all U.S. doctors began using OpenEvidence overnight to find information buried deep in the long tail of new medical studies, to validate edge cases and improve diagnoses. Nadler emphasizes the importance of accuracy and transparency in AI healthcare applications.
Hosted by: Pat Grady, Sequoia Capital
Mentioned in this episode:
Do We Still Need Clinical Language Models?: Paper from OpenEvidence founders showing that small, specialized models outperformed large models for healthcare diagnostics
Chinchilla paper: Seminal 2022 paper about scaling laws in large language models
Understand: Ted Chiang sci-fi novella published in 1991
OpenAI’s Isa Fulford and Josh Tobin discuss how the company’s newest agent, Deep Research, represents a breakthrough in AI research capabilities by training models end-to-end rather than using hand-coded operational graphs. The product leads explain how high-quality training data and the o3 model’s reasoning abilities enable adaptable research strategies, and why OpenAI thinks Deep Research will capture a meaningful percentage of knowledge work. Key product decisions that build transparency and trust include citations and clarification flows. By compressing hours of work into minutes, Deep Research transforms what’s possible for many business and consumer use cases.
Hosted by: Sonya Huang and Lauren Reeder, Sequoia Capital
Mentioned in this episode:
Yann LeCun’s Cake: An analogy Meta AI’s leader shared in his 2016 NIPS keynote
Palo Alto Networks CEO Nikesh Arora dispels DeepSeek hype by detailing all of the guardrails enterprises need to have in place to give AI agents “arms and legs.” No matter the model, deploying applications for precision use cases means superimposing better controls. Arora emphasizes that the real challenge isn’t just blocking threats but matching the accelerated pace of AI-powered attacks, requiring a fundamental shift from prevention-focused to real-time detection and response systems. CISOs are risk managers, but legacy companies competing with more risk-tolerant startups need to move quickly and embrace change.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
Cortex XSIAM: Security operations and incident remediation platform from Palo Alto Networks
MongoDB product leader Sahir Azam explains how vector databases have evolved from semantic search to become the essential memory and state layer for AI applications. He describes his view of how AI is transforming software development generally, and how combining vectors, graphs and traditional data structures enables the high-quality retrieval needed for mission-critical enterprise AI use cases. Drawing from MongoDB's successful cloud transformation, Azam shares his vision for democratizing AI development by making sophisticated capabilities accessible to mainstream developers through integrated tools and abstractions.
Hosted by: Sonya Huang and Pat Grady, Sequoia Capital
Mentioned in this episode:
Introducing ambient agents: Blog post by LangChain on a new UX pattern where AI agents can listen to an event stream and act on it
Google Gemini Deep Research: Sahir enjoys its amazing product experience
Perplexity: AI search app that Sahir admires for its product craft
Snipd: AI-powered podcast app Sahir likes
Stef Corazza leads generative AI development at Roblox after previously building Adobe’s 3D and AR platforms. His technical expertise, combined with Roblox’s unique relationship with its users, has led to the infusion of AI into its creation tools. Roblox has assembled the world’s largest multimodal dataset. Stef previews the Roblox Assistant and the company’s new 3D foundation model, while emphasizing the importance of maintaining positive experiences and civility on the platform.
Mentioned in this episode:
Driving Empire: A Roblox car racing game Stef particularly enjoys
RDC: Roblox Developer Conference
Ego.live: Roblox app to create and share synthetic worlds populated with human-like generative agents and simulated communities
PINNs: Physics-Informed Neural Networks
ControlNet: A model for controlling image diffusion by conditioning on an additional input image, which Stef says can be used as a 2.5D approach to 3D generation
Neural rendering: A combination of deep learning with computer graphics principles, developed by Nvidia in its RTX platform
Hosted by: Konstantine Buhler and Sonya Huang, Sequoia Capital
Ioannis Antonoglou, founding engineer at DeepMind and co-founder of ReflectionAI, has seen the triumphs of reinforcement learning firsthand. From AlphaGo to AlphaZero and MuZero, Ioannis has built the most powerful agents in the world. Ioannis breaks down key moments in AlphaGo's games against Lee Sedol (Moves 37 and 78), the importance of self-play and the impact of scale, reliability, planning and in-context learning as core factors that will unlock the next level of progress in AI.
Hosted by: Stephanie Zhan and Sonya Huang, Sequoia Capital
Mentioned in this episode:
PPO: Proximal Policy Optimization, a reinforcement learning algorithm developed by OpenAI; also used for RLHF in ChatGPT
MuJoCo: Open source physics engine used to develop PPO
Monte Carlo Tree Search: Heuristic search algorithm used in AlphaGo, as well as in video compression for YouTube and the self-driving system at Tesla
AlphaZero: The DeepMind model that taught itself from scratch how to master the games of chess, shogi and Go
MuZero: The DeepMind follow-up to AlphaZero that mastered games without knowing the rules and was able to plan winning strategies in unknown environments
AlphaChem: Chemical Synthesis Planning with Tree Search and Deep Neural Network Policies
DQN: Deep Q-Network, introduced in the 2013 paper Playing Atari with Deep Reinforcement Learning
AlphaFold: DeepMind model for predicting protein structures, for which Demis Hassabis, John Jumper and David Baker won the 2024 Nobel Prize in Chemistry