High Signal: Data Science | Career | AI
Author: Delphina
© 2025 Delphina
Description
Welcome to High Signal, the podcast for data science, AI, and machine learning professionals.
High Signal brings you the best from the best in data science, machine learning, and AI. Hosted by Hugo Bowne-Anderson and produced by Delphina, each episode features deep conversations with leading experts, such as Michael Jordan (UC Berkeley), Andrew Gelman (Columbia), and Chiara Farronato (HBS).
Join us for practical insights from the best to help you advance your career and make an impact in these rapidly evolving fields.
More on our website: https://high-signal.delphina.ai/
29 Episodes
Lis Costa of the Behavioural Insights Team returns to High Signal with a behavioral science playbook for the AI era, focused on human and business impact. We discuss why the potential of AI can only be fulfilled by understanding a single bottleneck: human behavior. The conversation covers why leaders must intervene now to prevent temporary adoption patterns from calcifying into permanent organizational norms (the QWERTY Effect), and how to move organizations past simply automating drudgery toward deep integration.
We dig into why AI adoption is fundamentally a behavioral challenge, providing a diagnostic framework for leaders to identify stalled progress using the Motivation-Capability-Trust triad. Lis explains how to reframe AI deployment by leveraging loss aversion to bypass employee skepticism, and how to design workflows that improve human reasoning rather than replace it. The conversation provides clear guidance on intentional task offloading, the power of using AI to stress-test decisions, and why sanctioning employee experimentation is essential to discovering high-value use cases.
LINKS
AI & Human Behaviour: Augment, Adopt, Align, Adapt (https://www.bi.team/publications/ai-and-human-behaviour/)
Thinking Fast and Slow in AI (https://sites.google.com/view/sofai/home)
How does LLM use affect decision-making? (https://www.bi.team/wp-content/uploads/2025/09/How-can-LLMs-reduce-our-own-biases-Analysis-Report.pdf)
Defaults, Decisions, and Dynamic Systems: Behavioral Science Meets AI with Lis Costa (High Signal) (https://high-signal.delphina.ai/episode/defaults-decisions-and-dynamic-systems-behavioral-science-meets-ai)
The Behavioral Insights Team (https://www.bi.team/)
Lis Costa on LinkedIn (https://uk.linkedin.com/in/elisabeth-costa-6a5b35248)
High Signal podcast (https://high-signal.delphina.ai/)
Watch the podcast episode on YouTube (https://youtu.be/dXId0BbcsSE)
Delphina's Newsletter (https://delphinaai.substack.com/)
Lance Martin of LangChain joins High Signal to outline a new playbook for engineering in the AI era, where the ground is constantly shifting under the feet of builders. He explains how the exponential improvement of foundation models is forcing a complete rethink of how software is built, revealing why top products from Claude Code to Manus are in a constant state of re-architecture simply to keep up.
We dig into why the old rules of ML engineering no longer apply, and how Rich Sutton's "bitter lesson" dictates that simple, adaptable systems are the only ones that will survive. The conversation provides a clear framework for leaders on the critical new disciplines of context engineering to manage cost and reliability, the architectural power of the "agent harness" to expand capabilities without adding complexity, and why the most effective evaluation of these new systems is shifting away from static benchmarks and towards a dynamic model of in-app user feedback.
LINKS
Lance on LinkedIn (https://www.linkedin.com/in/lance-martin-64a33b5/)
Context Engineering for Agents by Lance Martin (https://rlancemartin.github.io/2025/06/23/context_engineering/)
Learning the Bitter Lesson by Lance Martin (https://rlancemartin.github.io/2025/07/30/bitter_lesson/)
Context Engineering in Manus by Lance Martin (https://rlancemartin.github.io/2025/10/15/manus/)
Context Rot: How Increasing Input Tokens Impacts LLM Performance by Chroma (https://research.trychroma.com/context-rot)
Building effective agents by Erik Schluntz and Barry Zhang at Anthropic (https://www.anthropic.com/engineering/building-effective-agents)
Effective context engineering for AI agents by Anthropic (https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents)
How we built our multi-agent research system by Anthropic (https://www.anthropic.com/engineering/multi-agent-research-system)
Measuring AI Ability to Complete Long Tasks by METR (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/)
Your AI Product Needs Evals by Hamel Husain (https://hamel.dev/blog/posts/evals/index.html)
Introducing Roast: Structured AI workflows made easy by Shopify (https://shopify.engineering/introducing-roast)
Watch the podcast episode on YouTube (https://youtu.be/2Muxy3wE-E0)
Delphina's Newsletter (https://delphinaai.substack.com/)
Paras Doshi (Head of Data, Opendoor; former data leader at Amazon) joins High Signal to unpack the playbook for building an indispensable data function. He shares his experience tackling the classic scaling challenge of fragmented data at Opendoor, where rapid growth led to inconsistent metrics across the business, and turning the data function into a centralized strategic asset.
We dive deep into how to earn a true seat at the table, why he believes AI is creating the "100x individual contributor," and how the principles of agency, autonomy, and adaptability are the new essentials for data careers. The conversation also explores the pragmatic divide between batch and real-time ML, how to identify a truly data-led company, and why leaders must shield their top talent to unlock disproportionate impact.
LINKS
Paras Doshi on LinkedIn (https://www.linkedin.com/in/doshiparas/)
Insight Extractor, Paras' blog on analytics, data science, and business intelligence (https://insightextractor.com/)
Watch the conversation on YouTube (https://youtu.be/DDSKxL_JeLc)
Delphina's Newsletter (https://delphinaai.substack.com/)
Vishnu Ram Venkataraman (Generative AI Executive & Entrepreneur; former AI Leader at Credit Karma and Intuit) joins High Signal to unpack the true cost of generative AI. Having scaled AI solutions impacting over 140 million users, Vishnu reveals why the ease of shipping Gen AI prototypes often masks significant operational and engineering debt, challenging the conventional wisdom of rapid deployment.
We dive deep into the strategic shift from traditional ML to Gen AI, discussing why the shelf value of code is dramatically falling, how to design new organizational triads for continuous iteration, and the critical differences in testing probabilistic AI systems. The conversation also explores how to manage risk with sensitive data, the power of synthetic data in early development, and which mature ML practices remain indispensable in the new AI era.
LINKS
Vishnu on LinkedIn (https://www.linkedin.com/in/vishnuvram/)
Fei-Fei Li on Generative AI as a Civilizational Technology (https://high-signal.delphina.ai/episode/fei-fei-on-how-human-centered-ai-actually-gets-built)
Tim O'Reilly on The End of Programming As We Know It (https://high-signal.delphina.ai/episode/tim-oreilly-on-the-end-of-programming-as-we-know-it)
Watch the conversation on YouTube (https://youtu.be/vDQdCl_EOKg)
Delphina's Newsletter (https://delphinaai.substack.com/)
Sergey Fogelson (VP of Data Science, TelevisaUnivision) joins High Signal to reveal how the world’s largest Spanish-language media company built a sophisticated data engine from the ground up. This transformation fueled a tenfold expansion of its digital streaming business by redefining how the company connects with 300 million viewers worldwide. At the heart of this success is a proprietary household graph that creates a single, privacy-first view of a massive and culturally diverse audience.
We dig into the journey from basic data unification to building production-ready recommendation engines, how his team uses embeddings on user behavior to uncover surprising connections in content consumption, and the trade-offs between investing in internal data tools versus direct revenue-driving products. The conversation also explores a pragmatic framework for AI adoption, showing how foundational machine learning often outperforms chasing the latest trends and where LLMs can deliver real, measurable value.
LINKS
Sergey Fogelson on LinkedIn (https://www.linkedin.com/in/sergeyfogelson/)
Watch the conversation on YouTube (https://youtu.be/f9R8mGcwygU)
Delphina's Newsletter (https://delphinaai.substack.com/)
Andrés Bucchi (Chief Data Officer, LATAM Airlines) joins High Signal to unpack how a century-old airline reinvented itself with data and AI—and how that transformation is unlocking value from fuel efficiency to fraud detection. LATAM has built a massive data operation, experimenting across everything from pricing to operations, while customers benefit from a more reliable and secure travel experience.
We dig into how LATAM fostered an experimentation culture, why existing data infrastructure is a critical asset, and how the biggest bottleneck in AI adoption isn't the technology itself, but human decision-making. The conversation also looks ahead to the future of generative AI as a software engineering problem, and the organizational changes needed to unlock its full potential.
LINKS
Andrés Bucchi on LinkedIn (https://www.linkedin.com/in/bucchi/)
Tim O'Reilly on The End of Programming As We Know It, High Signal (https://high-signal.delphina.ai/episode/tim-oreilly-on-the-end-of-programming-as-we-know-it)
Watch the conversation on YouTube (https://youtu.be/U_eaOmt-Rw4)
Delphina's Newsletter (https://delphinaai.substack.com/)
Anu Bharadwaj (President, Atlassian) joins High Signal to unpack how humans and AI agents will work together across the enterprise, and how that shift could change the very nature of teamwork. Atlassian employees have already built thousands of agents across product, marketing, engineering, and HR teams, while customers like HarperCollins are cutting manual work by 4x as industries from publishing to finance rethink their workflows.
We dig into how Atlassian’s culture enables bottom-up experimentation, why grounding and reliability are critical for adoption, and how non-technical teams are often the ones creating the most useful agents. The conversation also looks ahead to the frontiers of multiplayer agent collaboration, proactive and ambient workflows, and the governance and compliance challenges enterprises will face as agents move from tools to teammates.
LINKS
Anu on LinkedIn (https://www.linkedin.com/in/anutthara/)
Building effective agents by Erik Schluntz and Barry Zhang at Anthropic (https://www.anthropic.com/engineering/building-effective-agents)
How we built our multi-agent research system by Anthropic (https://www.anthropic.com/engineering/multi-agent-research-system)
Watch the podcast episode on YouTube (https://youtu.be/898M86sKIi8?si=YGoekFzVJ0UH6pCJ)
Delphina's Newsletter (https://delphinaai.substack.com/)
Tomasz Tunguz (Theory Ventures) joins High Signal to unpack why a trillion dollars of market cap is up for grabs as AI reshapes enterprise software. He explains why workflows are now changing faster than packaged software can keep up, how “liquid software” is redefining CRM and marketing automation, and why background agents will require a new kind of “agent inbox.” We discuss the compounding errors that arise when tools are chained too finely, the hidden AI technical debt accumulating in today’s systems, and why modular stacks—mixing local and cloud models—will beat monolithic apps. The conversation also surfaces early memory architectures, what breaks when one IC manages 100 agents, and how these shifts change the real bottlenecks in scaling AI.
LINKS
Tomasz's Website (check out his blog!) (https://tomtunguz.com/)
Tomasz on LinkedIn (https://www.linkedin.com/in/tomasztunguz/)
Building effective agents by Erik Schluntz and Barry Zhang at Anthropic (https://www.anthropic.com/engineering/building-effective-agents)
How we built our multi-agent research system by Anthropic (https://www.anthropic.com/engineering/multi-agent-research-system)
Tim O'Reilly on The End of Programming As We Know It (https://high-signal.delphina.ai/episode/tim-oreilly-on-the-end-of-programming-as-we-know-it)
Delphina's Newsletter (https://delphinaai.substack.com/)
Amy Edmondson (Harvard Business School) and Mike Luca (Johns Hopkins) join High Signal to unpack what actually drives good decisions in data‑rich organizations. Using contrasts like the Bay of Pigs vs. the Cuban Missile Crisis and product cases such as Airbnb’s work on measuring discrimination, they show how decision quality tracks conversation quality—framing options, surfacing uncertainty, and challenging assumptions. We cover common failure modes (correlation vs. causation, anchoring, hierarchy, false precision), practical meeting designs that raise the signal, and where algorithms and LLMs help or hinder human judgment.
LINKS
Amy on LinkedIn (https://www.linkedin.com/in/amycedmondson/)
Mike on LinkedIn (https://www.linkedin.com/in/profluca/)
Where Data-Driven Decision-Making Can Go Wrong: Five pitfalls to avoid by Michael Luca and Amy C. Edmondson (https://hbr.org/2024/09/where-data-driven-decision-making-can-go-wrong)
Many Analysts, One Data Set: Making Transparent How Variations in Analytic Choices Affect Results (https://journals.sagepub.com/doi/10.1177/2515245917747646)
Trillion Dollar Coach by Eric Schmidt, Jonathan Rosenberg, and Alan Eagle (https://www.trilliondollarcoach.com/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Daragh Sibley, Chief Algorithms Officer at Literati and former Director of Data Science at Stitch Fix, joins High Signal to unpack how machine learning moves from slide-deck promise to bottom-line impact. He walks through his shift from academic research on how kids learn to read to owning inventory and personalization algorithms that decide which five books land in every child’s box. We dig into the moment a data leader stops advising and starts owning P&L-critical calls, why some problems deserve simple analytics while others need high-dimensional models, and how to design workflows where human judgment and algorithmic predictions share accountability. Along the way we talk incentive design, balancing exploration and exploitation in inventory, and measuring success in dollars—not dashboards.
LINKS
Daragh on LinkedIn (https://www.linkedin.com/in/daragh-sibley-2111835/)
Eric Colson on Why 90% of Data Science Fails—And How to Fix It (https://high-signal.delphina.ai/episode/why-90-of-data-science-fails-and-how-to-fix-it-eric-colson)
Sudarshan Seshadri on High-Stakes AI Systems and the Cost of Getting It Wrong (https://high-signal.delphina.ai/episode/high-stakes-ai-systems-and-the-cost-of-getting-it-wrong)
Delphina's Newsletter (https://delphinaai.substack.com/)
Lis Costa, Chief of Innovation and Partnerships at the Behavioural Insights Team, joins High Signal to explore how behavioral science is reshaping public policy, digital platforms, and machine learning.
She explains how defaults influence behavior at scale, why personalization and chatbots are unlocking new kinds of interventions, and what happens when AI systems meet real-world complexity. We also discuss the limits of nudging, the promise of boosting, and why building for human decision-making requires more than just good models.
LINKS
The Behavioral Insights Team (https://www.bi.team/)
Lis Costa on LinkedIn (https://uk.linkedin.com/in/elisabeth-costa-6a5b35248)
High Signal podcast (https://high-signal.delphina.ai/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Sudarshan Seshadri—VP of AI, Data Science, and Foundations Engineering at Alto Pharmacy—joins us to explore what it takes to build high-stakes AI systems that people can actually trust. He shares lessons from deploying machine learning and LLMs in healthcare, where speed, safety, and uncertainty must be carefully balanced. We talk about designing AI to support pharmacist judgment, the shift from bottlenecks to decision backbones, and why great data leaders are really architects of how irreversible decisions get made.
LINKS
Suddu on LinkedIn (https://www.linkedin.com/in/ss01/)
Careers at Alto Pharmacy (https://www.alto.com/careers)
High Signal podcast (https://high-signal.delphina.ai/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Roberto Medri, VP of Data Science at Instagram, explains why most experiments fail, how misaligned incentives warp product development, and what it takes to drive real impact with data science. He shares what teams get wrong about launches, why ego gets in the way of learning, and how Instagram turned Reels from a struggling product into a global success. A candid look at product, data, and decision-making inside one of the world’s most influential platforms.
LINKS
Roberto on LinkedIn (https://www.linkedin.com/in/robertomedri/)
High Signal podcast (https://high-signal.delphina.ai/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Fei-Fei Li—co-director of Stanford’s Human-Centered AI Institute and one of the most respected voices in the field—reflects on AI’s evolution from the early days of ImageNet to the rise of foundation models. She explains why spatial intelligence may be the next major shift, how human-centered design applies in practice, and why AI should be understood as a civilizational technology—one that shapes individuals, communities, and society at large.
LINKS
Stanford HAI (https://hai.stanford.edu/)
World Labs (https://www.worldlabs.ai/about)
"The World I See", Fei-Fei's book (a must read!) (https://us.macmillan.com/books/9781250897930/theworldsisee/)
Fei-Fei on X (https://x.com/drfeifei)
Fei-Fei on LinkedIn (https://www.linkedin.com/in/fei-fei-li-4541247/)
High Signal podcast (https://high-signal.delphina.ai/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Eoin O'Mahony—data science partner at Lightspeed, former Uber science lead, and one of the early architects of the system that kept NYC’s Citi Bikes available across the city—argues that positive metrics are meaningless if you don’t understand the mechanism behind them. At Uber, he was careful to make sure his launches both looked good on paper and made sense in practice. Now in venture, he’s applying that same rigor to unstructured data—using GenAI to scale a kind of work that’s long resisted systematization.
LINKS
Eoin's page at Lightspeed Ventures (https://lsvp.com/team-member/eoin-omahony/)
Ramesh Johari on How to Build an Experimentation Machine and Where Most Go Wrong (https://high-signal.delphina.ai/episode/ramesh-johari-on-how-to-build-an-experimentation-machine-and-where-most-go-wrong)
Chiara Farronato on Data Science Meets Management: Teamwork, Experimentation, and Decision-Making (https://high-signal.delphina.ai/episode/data-science-meets-management)
Delphina's Newsletter (https://delphinaai.substack.com/)
Barr Moses—co-founder and CEO of Monte Carlo—thinks we’re headed for an AI reckoning. Companies are building fast, but most are still managing data like it’s 2015. In this episode, she shares high-stakes failure stories (like a $100M schema change), explains why full-stack observability is becoming essential, and breaks down how LLM agents are already transforming data debugging. From culture to tooling, this is a sharp look at what real AI readiness requires—and why so few teams have it.
LINKS
2024 State of Reliable AI Survey – Monte Carlo (https://www.montecarlodata.com/blog-2024-state-of-reliable-ai-survey/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Unity’s $100M Data Error – Schema Change Gone Wrong (https://www.theregister.com/2021/11/11/unity_stock_plunge/)
Citibank’s $400M Fine for Risk Management Failures (https://www.reuters.com/article/us-citigroup-fine-idUSKBN26T0BK)
Google’s AI Recommends Adding Glue to Pizza (https://www.theverge.com/2024/5/23/24162896/google-ai-overview-hallucinations-glue-in-pizza)
Chevy Dealer’s AI Chatbot Agrees to Sell Tahoe for $1 (https://incidentdatabase.ai/cite/622/)
The AI Hierarchy of Needs by Monica Rogati (HackerNoon) (https://hackernoon.com/the-ai-hierarchy-of-needs-18f111fcc007)
Data Quality Fundamentals by Barr Moses, Lior Gavish, and Molly Vorwerck (O’Reilly) (https://www.oreilly.com/library/view/data-quality-fundamentals/9781098112035/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Tim O’Reilly—founder of O’Reilly Media and one of the most influential voices in tech—argues we’re not witnessing the end of programming, but the beginning of something far bigger. He draws on past computing revolutions to explore how AI is reshaping what it means to build software, why real breakthroughs come from the edge—not incumbents—and what it takes to learn, teach, and build responsibly in the age of AI.
LINKS
The End of Programming as We Know It by Tim O'Reilly (read this!) (https://www.oreilly.com/radar/the-end-of-programming-as-we-know-it/)
WTF? What’s the Future and Why It’s Up to Us (https://www.oreilly.com/tim/wtf-book.html)
The fundamental problem with Silicon Valley’s favorite growth strategy (https://qz.com/1540608/the-problem-with-silicon-valleys-obsession-with-blitzscaling-growth)
AI Engineering by Chip Huyen (https://www.oreilly.com/library/view/ai-engineering/9781098166298/)
Delphina's Newsletter (https://delphinaai.substack.com/)
Stefan Wager—Professor at Stanford and expert on causal machine learning—has worked with leading tech companies including Dropbox, Facebook, Google, and Uber. He challenges the widespread assumption that better predictions mean better decisions. Traditional machine learning excels at prediction, but is prediction really what your business needs? Stefan explores why predictive models alone often fail to answer critical “what-if” questions, how causal machine learning bridges this gap, and how you can start applying causal ML at work.
LINKS
Stefan's Stanford Website (https://www.gsb.stanford.edu/faculty-research/faculty/stefan-wager)
Machine Learning and Economics, Stefan and Susan Athey's lectures for the Stanford Graduate School of Business (https://www.youtube.com/@stanfordgsb)
Causal Inference: A Statistical Learning Approach (WIP!) (https://web.stanford.edu/~swager/causal_inf_book.pdf)
Mastering ‘Metrics: The Path from Cause to Effect by Angrist & Pischke (https://www.masteringmetrics.com/)
The Book of Why: The New Science of Cause and Effect by Judea Pearl and Dana Mackenzie (https://en.wikipedia.org/wiki/The_Book_of_Why)
Causal Inference: The Mixtape by Scott Cunningham (https://mixtape.scunning.com/)
A Technical Primer On Causality by Adam Kelleher (https://medium.com/@akelleh/a-technical-primer-on-causality-181db2575e41)
What Is Causal Inference? An Introduction for Data Scientists by Hugo Bowne-Anderson and Mike Loukides (https://www.oreilly.com/radar/what-is-causal-inference/)
The Episode on YouTube (https://www.youtube.com/watch?v=f9_Lt5p8avU&feature=youtu.be)
Delphina's Newsletter (https://delphinaai.substack.com/)
Peter Wang—Chief AI Officer at Anaconda and a driving force behind PyData—challenges conventional thinking about AI’s role in software development. As AI reshapes engineering, are we moving beyond writing code to orchestrating intelligence? Peter explores why companies are fixated on models instead of integration, how AI is breaking traditional software workflows, and what this shift means for open source. He also shares insights on the evolving role of engineers, the commoditization of AI models, and the deeper questions we should be asking about the future of software.
LINKS
Peter Wang on LinkedIn (https://www.linkedin.com/in/pzwang/)
Anaconda (https://www.anaconda.com/)
Mistral Saba (https://mistral.ai/news/mistral-saba)
Peter chatting with Hugo several years ago about the beginnings of PyData, NUMFOCUS, and Python for Data Science (https://vanishinggradients.fireside.fm/7)
Delphina's Newsletter (https://delphinaai.substack.com/)
Ari Kaplan—Global Head of Evangelism at Databricks and a pioneer in sports analytics—explains why businesses fixated on AI often overlook the real advantage: making better decisions with their own data. He shares lessons from his work building analytics teams for Major League Baseball, advising McLaren’s F1 strategy, and helping companies apply AI where it actually works—without falling into hype-driven traps.
LINKS
Ari on LinkedIn (https://www.linkedin.com/in/arikaplan/)
The Data Intelligence Platform For Dummies by Ari and Stephanie Diamond (https://www.databricks.com/resources/ebook/maximize-your-organizations-potential-data-and-ai)
Databricks' AI/BI: Intelligent analytics for real-world data (https://www.databricks.com/product/ai-bi)
That time Ari spoke with Travis Kelce about how Travis and the Kansas City Chiefs use data and analytics! (https://www.linkedin.com/posts/arikaplan_wiley-databricks-genai-activity-7221214362575724545-RZwc/)