No Priors: Artificial Intelligence | Machine Learning | Technology | Startups

Author: Conviction | Pod People
Description

At this moment of inflection in technology, co-hosts Elad Gil and Sarah Guo talk to the world's leading AI engineers, researchers and founders about the biggest questions: How far away is AGI? What markets are at risk for disruption? How will commerce, culture, and society change? What's happening at the state of the art in research? "No Priors" is your guide to the AI revolution. Email feedback to show@no-priors.com.

Sarah Guo is a startup investor and the founder of Conviction, an investment firm purpose-built to serve intelligent software, or "Software 3.0" companies. She spent nearly a decade incubating and investing at venture firm Greylock Partners.

Elad Gil is a serial entrepreneur and a startup investor. He co-founded Color Health and Mixer Labs (which was acquired by Twitter). He has invested in over 40 companies now worth $1B or more each, and is also the author of the High Growth Handbook.

19 Episodes
In this episode, Sarah and Elad speak with Microsoft CTO Kevin Scott about his unlikely journey from rural Virginia to becoming the driving force behind Microsoft's AI strategy. They discuss the partnership Kevin helped forge between Microsoft and OpenAI and explore the vision both companies have for the future of AI. They also discuss yesterday's announcement of "copilots" across the Microsoft product suite, Microsoft's GPU computing budget, the potential impact of open source AI models in the tech industry, the future of AI in relation to jobs, why Kevin is bullish on creative and physical work, and predictions for progress in AI this year. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: May 23, 2023: The Verge - Microsoft CTO Kevin Scott Thinks Sydney Might Make a Comeback May 23, 2023: Microsoft Outlines Framework For Building AI Apps and Copilots January 10, 2023: A Conversation with Kevin Scott: What's Next In AI Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @kevin_scott Show Notes: [00:00] - Kevin Scott's Journey to Microsoft CTO [12:44] - Microsoft and OpenAI Partnership [21:18] - The Future of Open Source AI [32:12] - AI for Everyone [45:29] - AI and the Future of Jobs [51:44] - The Future of AI and Regulation [58:10] - Taking a Global Perspective
What if AI could revolutionize healthcare with advanced language models? Sarah and Elad welcome Karan Singhal, Staff Software Engineer at Google Research, who specializes in medical AI and the development of Med-PaLM 2. On this episode, Karan emphasizes the importance of safety in medical AI applications and how language models like Med-PaLM 2 have the potential to augment scientific workflows and transform the standard of care. Other topics include the best workflows for AI integration, the potential impact of AI on drug discoveries, how AI can serve as a physician's assistant, and how privacy-preserving machine learning and federated learning can protect patient data while pushing the boundaries of medical innovation. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: May 10, 2023: PaLM 2 Announcement April 13, 2023: A Responsible Path to Generative AI in Healthcare March 31, 2023: Scientific American article on Med-PaLM February 28, 2023: The Economist article on Med-PaLM KaranSinghal.com Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @thekaransinghal Show Notes: [00:22] - Google's Medical AI Development [08:57] - Medical Language Model and Med-PaLM 2 Improvements [18:18] - Safety, cost/benefit decisions, drug discovery, health information, AI applications, and AI as a physician's assistant. [24:51] - Privacy Concerns - HIPAA's implications, privacy-preserving machine learning, and advances in GPT-4 and Med-PaLM 2. [37:43] - Large Language Models in Healthcare and short/long term use.
Mustafa Suleyman, co-founder of DeepMind and now co-founder and CEO of Inflection AI, joins Sarah and Elad to discuss how his interests in counseling, conflict resolution, and intelligence led him to start an AI lab that pioneered deep reinforcement learning, lead applied AI and policy efforts at Google, and more recently found Inflection and launch Pi. Mustafa offers insights on the changing structure of the web, the pressure Google faces in the age of AI personalization, predictions for model architectures, how to measure emotional intelligence in AIs, and the thinking behind Pi: the AI companion that knows you, is aligned to your interests, and provides companionship. Sarah and Elad also discuss Mustafa’s upcoming book, The Coming Wave (release September 12, 2023), which examines the political ramifications of AI and digital biology revolutions. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Forbes - Startup From Reid Hoffman and Mustafa Suleyman Debuts ChatBot Inflection.ai Mustafa-Suleyman.ai Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @mustafasuleymn Show Notes: [00:06] - From Conflict Resolution to AI Pioneering [10:36] - Defining Intelligence [15:32] - DeepMind's Journey and Breakthroughs [24:45] - The Future of Personal AI Companionship [33:22] - AI and the Future of Personalized Content [41:49] - The Launch of Pi [51:12] - Mustafa’s New Book The Coming Wave
How do you personalize AI models? A popular school of thought in AI is to just dump all the data you need into pre-training or fine tuning. But that may be less efficient and less controllable than alternatives — using AI models as a reasoning engine against external data sources. Kelvin Guu, Senior Staff Research Scientist at Google, joins Sarah and Elad this week to talk about retrieval, memory, training data attribution and model orchestration. At Google, he led some of the first efforts to leverage pre-trained LMs and neural retrievers, with >30 launches across multiple products. He has done some of the earliest work on retrieval-augmented language models (REALM) and training LLMs to follow instructions (FLAN). No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Kelvin Guu Website Google Scholar FLAN: Finetuned Language Models Are Zero-Shot Learners Simfluence: Modeling the Influence of Individual Training Examples by Simulating Training Runs ROME: Locating and Editing Factual Associations in GPT Branch-Train-Merge: Scaling Expert Language Models with Unsupervised Domain Discovery Large Language Models Struggle to Learn Long-Tail Knowledge  Sign up for new podcasts every week. 
Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Kelvin_Guu Show Notes: [1:44] - Kelvin's background in math, statistics and natural language processing at Stanford [3:24] - The questions driving the REALM Paper [7:08] - Frameworks around retrieval augmentation & expert models [10:16] - Why is modularity important [11:36] - FLAN Paper and instruction following [13:28] - Updating model weights in real time and other continuous learning methods [15:08] - Simfluence Paper & explainability with large language models [18:11] - ROME paper, "model surgery," and exciting research areas [19:51] - Personal opinions and thoughts on AI agents & research [24:59] - How the human brain compares to AGI regarding memory and emotions [28:08] - How models become more contextually available [30:45] - Accessibility of models [33:47] - Advice to future researchers
This week on No Priors, Sarah and Elad answer listener questions about tech and AI. Topics covered include the evolution of open-source models, Elon AI, regulating AI, areas of opportunity, and AI hype in the investing environment. Sarah and Elad also delve into the impact of AI on drug development and healthcare, and the balance between regulation and innovation. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil Show Notes:  [0:00:06] - The March of Progress for Open Source Foundation Models  [0:06:00] - Should AI Be Regulated? [0:13:49] - Investing in AI and Exploring the AI Opportunity Landscape [0:23:28] - The Impact of Regulation on Innovation [0:31:55] - AI in Healthcare and Biotech
So much of the AI conversation today revolves around models and new applications. But this AI revolution would not be possible without one thing – GPUs, Nvidia GPUs. The Nvidia A100 is the workhorse of today's AI ecosystem. This week on No Priors, Sarah Guo and Elad Gil sit down with Jensen Huang, the co-founder and CEO of NVIDIA, at their Santa Clara headquarters. Jensen co-founded the company in 1993 with a goal to create chips that accelerated graphics. Over the past thirty years, NVIDIA has gone far beyond gaming and become a $674B behemoth. Jensen talks about the meaning of this broader platform shift for developers, making very long term bets in areas such as climate and biopharma, their next-gen Hopper chip, why and how NVIDIA chooses problems that are unsolvable today, and the source of his iconic leather jackets. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Jensen Huang | NVIDIA Nvidia's A100 is the $10,000 chip powering the race for A.I. | CNBC Nvidia CEO Jensen Huang: A.I. is at 'inflection point' | Fortune Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Nvidia Show Notes: [1:26] - The early days when Jensen co-founded NVIDIA [4:58] - Why NVIDIA started to expand its aperture to artificial intelligence use cases [10:42] - The moment in 2012 Jensen realized AI was going to be huge [13:52] - How we're in a broader platform shift in computer science [17:48] - His vision for NVIDIA's future lines of business [18:09] - How NVIDIA has two motions: shipping reliable chips and solving new use cases [25:41] - Why no one should assume they're right for the job of CEO and why not every company needs to be architected as the US military [31:39] - What's next for NVIDIA's Hopper [32:57] - Durability of Transformers [35:08] - What Jensen is excited about in the future of AI & his advice for founders
Noam Shazeer played a central role in developing key foundations of modern AI - including co-inventing Transformers at Google, as well as pioneering AI chat pre-ChatGPT. These are the foundations supporting today's AI revolution. On this episode of No Priors, Noam discusses his work as an AI researcher, engineer, inventor, and now CEO. Noam Shazeer is currently the CEO and co-founder of Character.AI, a service that allows users to design and interact with their own personal bots that take on the personalities of well-known individuals or archetypes. You could have a Socratic conversation with Socrates. You could pretend you're being interviewed by Oprah. Or you could work through a life decision with a therapist bot. Character recently raised $150M from a16z, Elad Gil, and others. Noam talks about his early AI adventures at Google, why he started Character, and what he sees on the horizon of AI development. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Noam Shazeer - Google Scholar Noam Shazeer - Chief Executive Officer - Character.AI | LinkedIn Character.AI Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Character_ai Show Notes: [1:50] - Noam's early AI projects at Google [7:13] - Noam's focus on language models and AI applications [11:13] - Character's co-founder Daniel de Freitas Adiwardana's work on Google's LaMDA [13:53] - The origin story of Character.AI [18:47] - How AI can express emotions [26:51] - What Noam looks for in new hires
If you have 30 dollars, a few hours, and one server, then you are ready to create a ChatGPT-like model that can do what's known as instruction-following. Databricks' latest launch, Dolly, foreshadows a potential move in the industry toward smaller, more accessible, but extremely capable AIs. Plus, Dolly is open source, requires less computing power, and has fewer parameters than its counterparts. Matei Zaharia, co-founder & Chief Technologist at Databricks, joins Sarah and Elad to talk about how big data sets actually need to be, why manual annotation is becoming less necessary to train some models, and how he went from a Berkeley PhD student with a little project called Spark to the founder of a company that is now critical data infrastructure and is increasingly moving into AI. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Hello Dolly: Democratizing the magic of ChatGPT with open models Dolly Source Code on Github Matei Zaharia - Chief Technologist & Cofounder - Databricks | LinkedIn Matei Zaharia - Google Scholar Databricks debuts ChatGPT-like Dolly, a clone any enterprise can own | VentureBeat Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Databricks | @Matei_Zaharia Show Notes: [01:29] - Origin of Databricks [4:30] - Work at Stanford Lab [5:29] - Dolly and Role of Open Source [12:30] - Industry focus on high parameter count, understanding reasoning at small model scale [18:42] - Enterprise applications for Dolly & chat bots [25:06] - Making bets as an academic turned CTO [36:23] - The early stages of AI and future predictions
Everyone talks about the future impact of AI, but there's already an AI product that has revolutionized a profession. Alex Graveley was the principal engineer and Chief Architect behind GitHub Copilot, a sort of pair-programmer that auto-completes your code as you type. It has rapidly become a product that developers won't live without, and the most leaned-upon analogy for every new AI startup – Copilot for Finance, Sales, Marketing, Support, Writing, Decision-Making. Alex is a longtime hacker and tinkerer, open source contributor, repeat founder, and creator of products that millions of people use, such as Dropbox Paper. He has a new project in stealth, Minion AI. In this episode, we talk about the uncertain process of shipping Copilot, how code improves chain of thought for LLMs, how they improved the product and its performance, how people are using it, AI agents that can do work for us, stress testing society's resilience to waves of new technology, and his new startup, Minion AI. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Alex Graveley - San Francisco, California, United States | Professional Profile | LinkedIn Minion AI Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @alexgraveley | @ai_minion Show Notes: [1:50] - How Alex got started in technology [2:28] - Alex's earlier projects with Hack Pad and Dropbox Paper [07:32] - Why Alex always wanted to make bots that did stuff for people [11:56] - How Alex started working at GitHub and Copilot [27:11] - What is Minion AI [30:30] - What's possible on the horizon of AI
With advances in machine learning, the way we search for information online will never be the same. This week on the No Priors podcast, we dive into a startup that aims to be the most trustworthy place to search for information online. Perplexity.ai is a search engine that provides answers to questions in a conversational way and hints at what the future of search might look like. Aravind Srinivas is a co-founder and CEO of Perplexity. He is a former research scientist at OpenAI and completed his PhD in computer science at the University of California, Berkeley. Denis Yarats is a co-founder and Perplexity's CTO. He has a background in machine learning, having worked as a Research Scientist at Facebook AI Research and a machine learning engineer at Quora. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: Aravind Srinivas on Google Scholar Denis Yarats on Google Scholar Perplexity AI Perplexity AI Discord AI Chatbots Are Coming to Search Engines. Can You Trust Them? - Scientific American Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @AravSrinivas | @denisyarats Show Notes: [1:46] - How Perplexity AI iterates quickly and how the company has changed over time [5:46] - Approach to hiring and building a fast-paced team [10:43] - Why you don't need AI pedigree to transition to work or research AI [14:01] - Challenges when transitioning from AI research to running a company as CEO & CTO [16:50] - Why Perplexity only shows answers it can cite [19:33] - How Perplexity approaches reinforcement learning [20:49] - Trustworthiness and if an answer engine needs a personality [23:05] - Why answer engines will become their own market segment [26:38] - Implications of "the era of fewer clicks" on publishers and advertisers [30:20] - Monetization strategy [33:20] - Advice for those deciding between academia or startups
For the first time in decades, web search might be at risk of disruption. Bing is allied with OpenAI to integrate LLMs. Google has committed to launching new products. New startups are emerging. Sridhar Ramaswamy co-founded the challenger AI-powered private search platform Neeva in 2019. He is a former 16-year Google veteran who most recently led the internet's most profitable business as SVP in charge of Google Ads, Commerce and Privacy. Sridhar, Elad and Sarah talk about the challenge of building search, how LLMs have changed the landscape, and how chatbots and "answer services" will affect web publishers. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: LinkedIn Neeva Search Neeva Gist Poe by Quora Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @RamaswmySridhar Show Notes: [1:32] - Why Sridhar started a private search engine after leaving Google [11:11] - Information Retrieval Problems, Mapping Search Queries and LLMs [15:25] - Google and Bing's approach to search with LLMs [19:06] - Scale challenges when building a search engine startup [22:26] - Distribution challenges and why they released Neeva Gist [24:11] - Why Neeva is a privacy centric subscription service [28:25] - The relationship between search and publishers/content creators [30:16] - Sridhar's predictions on how AI will disrupt current ecosystems
When AI research is evolving at warp speed and takes significant capital and compute power, what is the role of academia? Dr. Percy Liang – Stanford computer science professor and director of the Stanford Center for Research on Foundation Models – talks about training costs, distributed infrastructure, model evaluation, alignment, and societal impact. Sarah Guo and Elad Gil join Percy at his office to discuss the evolution of research in NLP, why AI developers should aim for superhuman levels of performance, the goals of the Center for Research on Foundation Models, and Together, a decentralized cloud for artificial intelligence. No Priors is now on YouTube! Subscribe to the channel on YouTube and like this episode. Show Links: See Percy's Research on Google Scholar See Percy's bio on Stanford's website Percy on Stanford's Blog: What to Expect in 2023 in AI Together, a decentralized cloud for artificial intelligence Foundation AI models GPT-3 and DALL-E need release standards - Protocol The Time Is Now to Develop Community Norms for the Release of Foundation Models - Stanford Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @PercyLiang Show Notes: [1:44] - How Percy got into machine learning research and started the Center for Research on Foundation Models at Stanford [7:23] - The role of academia and academia's competitive advantages [13:30] - Research on natural language processing and computational semantics [27:20] - Smaller scale architectures that are competitive with transformers [35:08] - HELM (Holistic Evaluation of Language Models), a project whose goal is to evaluate language models [42:13] - Together, a decentralized cloud for artificial intelligence
Life-saving therapeutics continue to grow more costly to discover. At the same time, recent advances in using machine learning for the life sciences and medicine are extraordinary. Are we on the verge of a paradigm shift in biotech? This week, AI pioneer Daphne Koller joins Sarah Guo and Elad Gil on the podcast to help us explore that question. Daphne is the CEO and founder of Insitro — a company that applies machine learning to pharma discovery and development, specifically by leveraging "induced pluripotent stem cells." We explain Insitro's approach, why they're focused on generating their own data, why you can't cure schizophrenia in mice, and how to design a culture that supports both research and engineering. Daphne was previously a computer science professor at Stanford, and co-founder and co-CEO of edtech company Coursera. Show Links: Insitro - About Video: AWS re:Invent 2019 – Daphne Koller of insitro Talks About Using AWS to Transform Drug Development Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @DaphneKoller Show Notes: [1:49] - How Daphne combined her biology and tech interests and ran a bifurcated lab at Stanford [4:34] - Why Daphne gave up an endowed chair at Stanford to build Coursera [14:14] - How insitro approaches target identification problems and training data [18:33] - What are pluripotent stem cells and how insitro identifies individual neurons [24:08] - How insitro operates as an engine for drug discovery and partners to create the drugs themselves [26:48] - Role of regulations, clinical trials and disease progression in drug delivery [33:19] - Building a team and workplace culture that can bridge both bio and computer sciences [39:50] - What Daphne is paying attention to in the so-called golden age of machine learning [43:12] - Advice for leading a startup in edtech and healthtech
After starting as a talking emoji companion, Hugging Face is now an organizing force for the open source AI research ecosystem. Its models are used by companies such as Apple, Salesforce and Microsoft, and it's working to become the GitHub for ML. This week on the podcast, Sarah Guo and Elad Gil talk to Clem Delangue, co-founder and CEO of Hugging Face. Clem shares how they shifted away from their original product, why every employee at Hugging Face is responsible for community-building, the modalities he's most interested in, and what role open source has in the AI race. Show Links: Hugging Face website The $2 Billion Emoji: Hugging Face Wants To Be Launchpad For A Machine Learning Revolution - Forbes Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ClementDelangue Show Notes: [01:53] - how Clem first became interested in ML, being shouted at by eBay sellers, and the foretelling of the end of barcode scanning [3:34] - early iterations of Hugging Face, trying to make a less boring AI Tamagotchi, and switching directions towards open source tools [5:36] - advice for founders considering a change in direction, 30%+ experimentation [7:39] - first users, ML Twitter, approach to community [10:47] - enterprise ML maturity, days to production [12:54] - open source vs. proprietary models [15:56] - main model tasks, architectures and sizes [19:12] - decentralized infrastructure, data opt out [24:16] - Hugging Face's business model, GitHub [28:09] - What Clem is excited about in AI
This is a special bonus episode from our Founder Stories series, where entrepreneurs share the story of their startup journey. A delivery with Zipline is the closest thing we have to teleportation. It sounds like science fiction, but Zipline delivers life-saving medical supplies such as blood and vaccines to hospitals, doctors and people in need around the world with the world's largest autonomous drone network. This week on the podcast, Sarah Guo talks to Keller Rinaudo Cliffton, the co-founder and CEO of Zipline, about building a full-stack business that involves software, hardware and operations, how a culture of ruthless engineering practicality enabled them to do unlikely things, the state of autopilot in aircraft, their AI acoustic detect-and-avoid system, and why founders should build for users beyond the "golden billion." Show Links: Zipline's website Video: Drone Delivery Start-Up Zipline Beats Amazon, UPS And FedEx To The Punch | CNBC Keller Rinaudo: How we're using drones to deliver blood and save lives | TED Talk Meet Romotive: An Ambitious Startup That Blew Our Minds Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @KellerRinaudo Show Notes: [2:07] - Keller's earlier projects and early inspiration for Zipline and transforming logistics [7:40] - Why Zipline focused on healthcare logistics and Zipline's early near-death experiences as a company [15:32] - How Zipline iterated on the hardware while being ruthlessly practical with getting products in the customers' hands [21:52] - The difference between AI and Autopilot [25:51] - How Zipline developed its AI acoustic-based detect-and-avoid system [31:30] - Zipline's partnership with Rwanda's public health system [34:25] - Challenges in the business model
AI-generated images have been everywhere over the past year, but one company has fueled an explosive developer ecosystem around large image models: Stability AI. Stability builds open AI tools with a mission to improve humanity. Stability AI is most known for Stable Diffusion, the AI model where a user puts in a natural language prompt and the AI generates images. But they're also engaged in progressing models in natural language, voice, video, and biology. This week on the podcast, Emad Mostaque joins Sarah Guo and Elad Gil to talk about how this barely one-year-old, London-based company has changed the AI landscape, scaling laws, progress in different modalities, frameworks for AI safety and why the future of AI is open. Show Links: Stability.AI Stable Diffusion V2 on Hugging Face  Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @EMostaque Show Notes:  [2:00] - Emad’s background as one of the largest investors in video games and artificial intelligence [7:24] - Open-source efforts in AI [13:09] - Stability.AI as the only independent multimodal AI company in the world [15:28] - Computational biology, medical information and medical models [23:29] - Pace of Adoption [26:31] - AGI versus intelligence augmentation [31:38] - Stability.AI’s business model [37:44] - AI Safety
For a long time, AI-generated images and video felt like a fun toy. Cool, but not something that would bring value to professional content creators. But now we are at the exciting moment where machine learning tools have the power to unlock more creative ideas. This week on the podcast, Sarah Guo and Elad Gil talk to Cristobal Valenzuela, a technologist, artist and software developer. He’s also the CEO and co-founder of Runway, a web-based tool that allows creatives to use machine learning to generate and edit video. You've probably already seen Runway's work in action on the Late Show with Stephen Colbert and in the feature film Everything Everywhere All at Once. Show Links: Watch Cris Valenzuela’s 2018 thesis presentation at New York University’s ITP program. Read how Runway is used on the Late Show and in Everything Everywhere All at Once on the Runway Blog. Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @c_valenzuelab Show Notes:  [1:50] - Cris’s background and how he doesn’t see barriers between art and machine learning [6:46] - How Runway works as a tool [8:36] - The origins and early iterations of Runway [12:22] - Product sequencing and roadmapping in a fast growing space [15:43] - Runway as an applied research company [19:10] - Common pitfalls for founders to avoid [22:35] - How Runway structures teams for effective collaboration [24:22] - Learnings from how Runway built Greenscreen product [28:01] - Building a long-term and sustainable business [32:34] - Finding Product Market Fit [36:34] - The influence of AI tools in art as an artistic movement
AI can beat top players in chess, poker, and, now, Diplomacy. In November 2022, a bot named Cicero demonstrated mastery in this game, which requires natural language negotiation and cooperation with humans. In short, Cicero can lie, scheme, build trust, pass as human, and ally with humans. So what does that mean for the future of AGI? This week's guest is research scientist Noam Brown. He co-created Cicero on the Meta Fundamental AI Research Team, and is considered one of the smartest engineers and researchers working in AI today. Co-hosts Sarah Guo and Elad Gil talk to Noam about why all research should be high risk, high reward, the timeline until we have AGI agents negotiating with humans, why scaling isn't the only path to breakthroughs in AI, and if the Turing Test is still relevant. Show Links: More about Noam Brown Read the research article about Cicero (Diplomacy) published in Science. Read the research article about Libratus (heads-up poker) published in Science. Read the research article about Pluribus (multiplayer poker) published in Science. Watch the AlphaGo Documentary. Read "How Smart Are the Robots Getting?" by New York Times reporter Cade Metz Sign up for new podcasts every week. 
Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @Polynoamial Show Notes: [01:43] - What sparked Noam's interest in researching AI that could defeat games [6:00] - How AlexNet and AlphaGo changed the landscape of AI research [8:09] - Why Noam chose Diplomacy as the next game to work on after poker [9:51] - What Diplomacy is and why the game was so challenging for an AI bot [14:50] - Algorithmic breakthroughs and significance of AI bots that win in No-Limit Texas Hold'em poker [23:29] - The Nash Equilibrium and optimal play in poker [24:53] - How Cicero interacted with humans [27:58] - The relevance and usefulness of the Turing Test [31:05] - The data set used to train Cicero [31:54] - Bottlenecks to AI researchers and challenges with scaling [40:10] - The next frontier in researching games for AI [42:55] - Domains that humans will still dominate and applications for AI bots in the real world [48:13] - Reasoning challenges with AI
AI is transforming our future, but what does that really mean? In ten years, will humans be forced to please our AGI overlords, or will we have unlocked unlimited capacity for human potential? Those questions are why Sarah Guo and Elad Gil started this new podcast, No Priors. In each episode, Sarah and Elad talk with the leading engineers, researchers and founders in AI, across the stack. We'll talk about the technical state of the art, how it impacts business, and get our guests to predict what's next. Follow the podcast wherever you listen so you never miss an episode. We'll see you next week with a new episode. Email feedback to show@no-priors.com