Machine Learning Street Talk (MLST)

Author: Machine Learning Street Talk (MLST)

Subscribed: 842 | Played: 21,850

Description

Welcome! We engage in fascinating discussions with pre-eminent figures in the AI field. Our flagship show covers current affairs in AI, cognitive science, neuroscience and philosophy of mind with in-depth analysis. Our approach is unrivalled in terms of scope and rigour – we believe in intellectual diversity in AI, and we touch on all of the main ideas in the field with the hype surgically removed. MLST is run by Tim Scarfe, Ph.D. (https://www.linkedin.com/in/ecsquizor/) and features regular appearances from Dr. Keith Duggar, who holds a Ph.D. from MIT (https://www.linkedin.com/in/dr-keith-duggar/).
167 Episodes
Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe. YT version: https://youtu.be/DP454c1K_vQ MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC 00:00:00 Intro 00:03:38 Reasoning 00:13:09 Potential AI Breakthroughs Reducing Computation Needs 00:20:39 Memorization vs. Generalization in AI 00:25:19 Approach to the ARC Challenge 00:29:10 Perceptions of ChatGPT and AGI 00:58:45 Abstract Principles of Jürgen's Approach 01:04:17 Analogical Reasoning and Compression 01:05:48 Breakthroughs in 1991: the P, the G, and the T in ChatGPT and Generative AI 01:15:50 Use of LSTM in Language Models by Tech Giants 01:21:08 Neural Network Aspect Ratio Theory 01:26:53 Reinforcement Learning Without Explicit Teachers Refs: ★ "Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber): ★ Chain Rule For Backward Credit Assignment (Leibniz, 1676) ★ First Neural Net / Linear Regression / Shallow Learning (Gauss & Legendre, circa 1800) ★ First 20th Century Pioneer of Practical AI (Quevedo, 1914) ★ First Recurrent NN (RNN) Architecture (Lenz, Ising, 1920-1925) ★ AI Theory: Fundamental Limitations of Computation and Computation-Based AI (Gödel, 1931-34) ★ Unpublished ideas about evolving RNNs (Turing, 1948) ★ Multilayer Feedforward NN Without Deep Learning (Rosenblatt, 1958) ★ First Published Learning RNNs (Amari and others, ~1972) ★ First Deep Learning (Ivakhnenko & Lapa, 1965) ★ Deep Learning by Stochastic Gradient Descent (Amari, 1967-68) ★ ReLUs (Fukushima, 1969) ★ Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960) ★ Backpropagation for NNs (Werbos, 1982) ★ First Deep Convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988). ★ Metalearning or Learning to Learn (Schmidhuber, 1987) ★ Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, Feb 1990; see the G in Generative AI and ChatGPT) ★ NNs Learn to Generate Subgoals and Work on Command (Schmidhuber, April 1990) ★ NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT) ★ Deep Learning by Self-Supervised Pre-Training. 
Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT) ★ Experiments with Pre-Training; Analysis of Vanishing/Exploding Gradients, Roots of Long Short-Term Memory / Highway Nets / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber) ★ LSTM journal paper (1997, most cited AI paper of the 20th century) ★ xLSTM (Hochreiter, 2024) ★ Reinforcement Learning Prompt Engineer for Abstract Reasoning and Planning (Schmidhuber 2015) ★ Mindstorms in Natural Language-Based Societies of Mind (2023 paper by Schmidhuber's team) https://arxiv.org/abs/2305.17066 ★ Bremermann's physical limit of computation (1982) EXTERNAL LINKS CogX 2018 - Professor Juergen Schmidhuber https://www.youtube.com/watch?v=17shdT9-wuA Discovering Neural Nets with Low Kolmogorov Complexity and High Generalization Capability (Neural Networks, 1997) https://sferics.idsia.ch/pub/juergen/loconet.pdf The paradox at the heart of mathematics: Gödel's Incompleteness Theorem - Marcus du Sautoy https://www.youtube.com/watch?v=I4pQbo5MQOs (Refs truncated, full version in the YouTube video description)
Professor Pedro Domingos is an AI researcher and professor of computer science. He expresses skepticism about current AI regulation efforts and argues for faster AI development rather than slowing it down. He also discusses the need for new innovations to fulfil the promises of current AI techniques. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Show notes: * Domingos' views on AI regulation and why he believes it's misguided * His thoughts on the current state of AI technology and its limitations * Discussion of his novel "2040", a satirical take on AI and tech culture * Explanation of his work on "tensor logic", which aims to unify neural networks and symbolic AI * Critiques of other approaches in AI, including those of OpenAI and Gary Marcus * Thoughts on the AI "bubble" and potential future developments in the field Prof. Pedro Domingos: https://x.com/pmddomingos 2040: A Silicon Valley Satire [Pedro's new book] https://amzn.to/3T51ISd TOC: 00:00:00 Intro 00:06:31 Bio 00:08:40 Filmmaking skit 00:10:35 AI and the wisdom of crowds 00:19:49 Social Media 00:27:48 Master algorithm 00:30:48 Neurosymbolic AI / abstraction 00:39:01 Language 00:45:38 Chomsky 01:00:49 2040 Book 01:18:03 Satire as a shield for criticism? 01:29:12 AI Regulation 01:35:15 Gary Marcus 01:52:37 Copyright 01:56:11 Stochastic parrots come home to roost 02:00:03 Privacy 02:01:55 LLM ecosystem 02:05:06 Tensor logic Refs: The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World [Pedro Domingos] https://amzn.to/3MiWs9B Rebooting AI: Building Artificial Intelligence We Can Trust [Gary Marcus] https://amzn.to/3AAywvL Flash Boys [Michael Lewis] https://amzn.to/4dUGm1M
Andrew Ilyas is a PhD student at MIT who is about to start as a professor at CMU. We discuss data modeling and how datasets influence model predictions; adversarial examples in machine learning and why they occur; robustness in machine learning models; black-box attacks on machine learning systems; biases in data collection and dataset creation, particularly in ImageNet; and self-selection bias in data and methods to address it. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api Andrew's site: https://andrewilyas.com/ https://x.com/andrew_ilyas TOC: 00:00:00 - Introduction and Andrew's background 00:03:52 - Overview of the machine learning pipeline 00:06:31 - Data modeling paper discussion 00:26:28 - TRAK: Evolution of data modeling work 00:43:58 - Discussion on abstraction, reasoning, and neural networks 00:53:16 - "Adversarial Examples Are Not Bugs, They Are Features" paper 01:03:24 - Types of features learned by neural networks 01:10:51 - Black box attacks paper 01:15:39 - Work on data collection and bias 01:25:48 - Future research plans and closing thoughts References: Adversarial Examples Are Not Bugs, They Are Features https://arxiv.org/pdf/1905.02175 TRAK: Attributing Model Behavior at Scale https://arxiv.org/pdf/2303.14186 Datamodels: Predicting Predictions from Training Data https://arxiv.org/pdf/2202.00622 IMAGENET-TRAINED CNNS https://arxiv.org/pdf/1811.12231 ZOO: Zeroth Order Optimization Based Black-box https://arxiv.org/pdf/1708.03999 A Spline Theory of Deep Networks https://proceedings.mlr.press/v80/balestriero18b/balestriero18b.pdf Scaling Monosemanticity https://transformer-circuits.pub/2024/scaling-monosemanticity/ Adversarial Examples Are Not Bugs, They Are Features https://gradientscience.org/adv/ Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies https://proceedings.mlr.press/v235/bartoldson24a.html Prior Convictions: Black-Box Adversarial Attacks with Bandits and Priors https://arxiv.org/abs/1807.07978 Estimation of Standard Auction Models https://arxiv.org/abs/2205.02060 From ImageNet to Image Classification: Contextualizing Progress on Benchmarks https://arxiv.org/abs/2005.11295 What Makes A Good Fisherman? Linear Regression under Self-Selection Bias https://arxiv.org/abs/2205.03246 Towards Tracing Factual Knowledge in Language Models Back to the Training Data [Akyürek] https://arxiv.org/pdf/2205.11482
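To make the datamodeling idea above concrete, here is a minimal, hypothetical sketch (not the authors' code): approximate a model's output on one fixed test example as a linear function of which training points were included in the training subset, then read off the most influential points from the learned weights. The synthetic `influence` vector and the `train_and_measure` stub are stand-ins for actually retraining a model many times, and the paper itself uses sparse rather than ridge regression.

```python
import numpy as np
from sklearn.linear_model import Ridge  # the paper uses sparse regression; Ridge keeps the sketch short

rng = np.random.default_rng(0)
n_train, n_subsets = 1000, 5000

# Each row is a binary mask indicating which training examples were in that subset.
masks = rng.random((n_subsets, n_train)) < 0.5

# Hypothetical ground truth: a handful of training points dominate this test prediction.
influence = np.zeros(n_train)
influence[:10] = rng.normal(1.0, 0.1, 10)

def train_and_measure(mask):
    # Stand-in for "train a model on this subset, record its margin on a fixed test point".
    return mask.astype(float) @ influence + rng.normal(0, 0.5)

margins = np.array([train_and_measure(m) for m in masks])

# Fit the datamodel: each weight estimates one training point's effect on the test prediction.
datamodel = Ridge(alpha=1.0).fit(masks.astype(float), margins)
top = np.argsort(-np.abs(datamodel.coef_))[:10]
print("Estimated most influential training indices:", top)
```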
Dr. Joscha Bach introduces a surprising idea called "cyber animism" in his AGI-24 talk - the notion that nature might be full of self-organizing software agents, similar to the spirits in ancient belief systems. Bach suggests that consciousness could be a kind of software running on our brains, and wonders if similar "programs" might exist in plants or even entire ecosystems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Joscha takes us on a tour de force through history, philosophy, and cutting-edge computer science, teasing us to rethink what we know about minds, machines, and the world around us. Joscha believes we should blur the lines between human, artificial, and natural intelligence, and argues that consciousness might be more widespread and interconnected than we ever thought possible. Dr. Joscha Bach https://x.com/Plinz This is video 2/9 from our coverage of AGI-24 in Seattle https://agi-conf.org/2024/ Watch the official MLST interview with Joscha, which we did right after this talk, now on early access on our Patreon - https://www.patreon.com/posts/joscha-bach-110199676 (you also get access to our private discord and biweekly calls) TOC: 00:00:00 Introduction: AGI and Cyberanimism 00:03:57 The Nature of Consciousness 00:08:46 Aristotle's Concepts of Mind and Consciousness 00:13:23 The Hard Problem of Consciousness 00:16:17 Functional Definition of Consciousness 00:20:24 Comparing LLMs and Human Consciousness 00:26:52 Testing for Consciousness in AI Systems 00:30:00 Animism and Software Agents in Nature 00:37:02 Plant Consciousness and Ecosystem Intelligence 00:40:36 The California Institute for Machine Consciousness 00:44:52 Ethics of Conscious AI and Suffering 00:46:29 Philosophical Perspectives on Consciousness 00:49:55 Q&A: Formalisms for Conscious Systems 00:53:27 Coherence, Self-Organization, and Compute Resources YT version (very high quality, filmed by us live) https://youtu.be/34VOI_oo-qM Refs: Aristotle's work on the soul and consciousness Richard Dawkins' work on genes and evolution Gerald Edelman's concept of Neural Darwinism Thomas Metzinger's book "Being No One" Yoshua Bengio's concept of the "consciousness prior" Stuart Hameroff's theories on microtubules and consciousness Christof Koch's work on consciousness Daniel Dennett's "Cartesian Theater" concept Giulio Tononi's Integrated Information Theory Mike Levin's work on organismal intelligence The concept of animism in various cultures Freud's model of the mind Buddhist perspectives on consciousness and meditation The Genesis creation narrative (for its metaphorical interpretation) California Institute for Machine Consciousness
Prof Gary Marcus revisited his keynote from AGI-21, noting that many of the issues he highlighted then are still relevant today despite significant advances in AI. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Gary Marcus criticized current large language models (LLMs) and generative AI for their unreliability, tendency to hallucinate, and inability to truly understand concepts. Marcus argued that the AI field is experiencing diminishing returns with current approaches, particularly the "scaling hypothesis" that simply adding more data and compute will lead to AGI. He advocated for a hybrid approach to AI that combines deep learning with symbolic AI, emphasizing the need for systems with deeper conceptual understanding. Marcus highlighted the importance of developing AI with innate understanding of concepts like space, time, and causality. He expressed concern about the moral decline in Silicon Valley and the rush to deploy potentially harmful AI technologies without adequate safeguards. Marcus predicted a possible upcoming "AI winter" due to inflated valuations, lack of profitability, and overhyped promises in the industry. He stressed the need for better regulation of AI, including transparency in training data, full disclosure of testing, and independent auditing of AI systems. Marcus proposed the creation of national and global AI agencies to oversee the development and deployment of AI technologies. He concluded by emphasizing the importance of interdisciplinary collaboration, focusing on robust AI with deep understanding, and implementing smart, agile governance for AI and AGI. YT Version (very high quality, filmed) https://youtu.be/91SK90SahHc Pre-order Gary's new book here: Taming Silicon Valley: How We Can Ensure That AI Works for Us https://amzn.to/4fO46pY Filmed at the AGI-24 conference: https://agi-conf.org/2024/ TOC: 00:00:00 Introduction 00:02:34 Introduction by Ben G 00:05:17 Gary Marcus begins talk 00:07:38 Critiquing current state of AI 00:12:21 Lack of progress on key AI challenges 00:16:05 Continued reliability issues with AI 00:19:54 Economic challenges for AI industry 00:25:11 Need for hybrid AI approaches 00:29:58 Moral decline in Silicon Valley 00:34:59 Risks of current generative AI 00:40:43 Need for AI regulation and governance 00:49:21 Concluding thoughts 00:54:38 Q&A: Cycles of AI hype and winters 01:00:10 Predicting a potential AI winter 01:02:46 Discussion on interdisciplinary approach 01:05:46 Question on regulating AI 01:07:27 Ben G's perspective on AI winter
DeepMind Research Scientist / MIT scholar Dr. Timothy Nguyen discusses his recent paper on understanding transformers through n-gram statistics. Nguyen explains his approach to analyzing transformer behavior using a kind of "template matching" (N-grams), providing insights into how these models process and predict language. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Key points covered include: A method for describing transformer predictions using n-gram statistics without relying on internal mechanisms. The discovery of a technique to detect overfitting in large language models without using holdout sets. Observations on curriculum learning, showing how transformers progress from simpler to more complex rules during training. Discussion of distance measures used in the analysis, particularly the variational distance. Exploration of model sizes, training dynamics, and their impact on the results. We also touch on philosophical aspects of describing versus explaining AI behavior, and the challenges in understanding the abstractions formed by neural networks. Nguyen concludes by discussing potential future research directions, including attempts to convert descriptions of transformer behavior into explanations of internal mechanisms. Timothy Nguyen earned his B.S. and Ph.D. in mathematics from Caltech and MIT, respectively. He held positions as Research Assistant Professor at the Simons Center for Geometry and Physics (2011-2014) and Visiting Assistant Professor at Michigan State University (2014-2017). During this time, his research expanded into high-energy physics, focusing on mathematical problems in quantum field theory. His work notably provided a simplified and corrected formulation of perturbative path integrals. Since 2017, Nguyen has been working in industry, applying his expertise to machine learning. He is currently at DeepMind, where he contributes to both fundamental research and practical applications of deep learning to solve real-world problems. Refs: The Cartesian Cafe https://www.youtube.com/@TimothyNguyen Understanding Transformers via N-Gram Statistics https://www.researchgate.net/publication/382204056_Understanding_Transformers_via_N-Gram_Statistics TOC 00:00:00 Timothy Nguyen's background 00:02:50 Paper overview: transformers and n-gram statistics 00:04:55 Template matching and hash table approach 00:08:55 Comparing templates to transformer predictions 00:12:01 Describing vs explaining transformer behavior 00:15:36 Detecting overfitting without holdout sets 00:22:47 Curriculum learning in training 00:26:32 Distance measures in analysis 00:28:58 Model sizes and training dynamics 00:30:39 Future research directions 00:32:06 Conclusion and future topics
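A rough, hypothetical illustration of the kind of "template matching" Nguyen describes (this is not the paper's pipeline): predict the next token from n-gram context counts with back-off, then compare that rule's distribution against a model's distribution using total variation (the "variational" distance mentioned above). The `model_dist` dictionary here stands in for a real transformer's output.

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the rug".split()

def ngram_tables(tokens, max_n=3):
    # Count, for every context of length 1..max_n, which token followed it.
    tables = {n: defaultdict(Counter) for n in range(1, max_n + 1)}
    for i in range(len(tokens) - 1):
        for n in range(1, max_n + 1):
            if i - n + 1 >= 0:
                ctx = tuple(tokens[i - n + 1 : i + 1])
                tables[n][ctx][tokens[i + 1]] += 1
    return tables

def ngram_predict(tables, context, max_n=3):
    # Back off from the longest matching context to shorter ones.
    for n in range(max_n, 0, -1):
        ctx = tuple(context[-n:])
        if ctx in tables[n]:
            counts = tables[n][ctx]
            total = sum(counts.values())
            return {tok: c / total for tok, c in counts.items()}
    return {}

def total_variation(p, q):
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

tables = ngram_tables(corpus)
template_dist = ngram_predict(tables, ["the", "cat", "sat"])
model_dist = {"on": 0.9, "down": 0.1}  # hypothetical transformer output for the same context
print(template_dist, total_variation(template_dist, model_dist))
```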
Jay Alammar, renowned AI educator and researcher at Cohere, discusses the latest developments in large language models (LLMs) and their applications in industry. Jay shares his expertise on retrieval augmented generation (RAG), semantic search, and the future of AI architectures. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Cohere Command R model series: https://cohere.com/command Jay Alammar: https://x.com/jayalammar Buy Jay's new book here! Hands-On Large Language Models: Language Understanding and Generation https://amzn.to/4fzOUgh TOC: 00:00:00 Introduction to Jay Alammar and AI Education 00:01:47 Cohere's Approach to RAG and AI Re-ranking 00:07:15 Implementing AI in Enterprise: Challenges and Solutions 00:09:26 Jay's Role at Cohere and the Importance of Learning in Public 00:15:16 The Evolution of AI in Industry: From Deep Learning to LLMs 00:26:12 Expert Advice for Newcomers in Machine Learning 00:32:39 The Power of Semantic Search and Embeddings in AI Systems 00:37:59 Jay Alammar's Journey as an AI Educator and Visualizer 00:43:36 Visual Learning in AI: Making Complex Concepts Accessible 00:47:38 Strategies for Keeping Up with Rapid AI Advancements 00:49:12 The Future of Transformer Models and AI Architectures 00:51:40 Evolution of the Transformer: From 2017 to Present 00:54:19 Preview of Jay's Upcoming Book on Large Language Models Disclaimer: This is the fourth video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview. Note also that this combines several previously unpublished interviews from Jay into one, the earlier one at Tim's house was shot in Aug 2023, and the more recent one in Toronto in May 2024. Refs: The Illustrated Transformer https://jalammar.github.io/illustrated-transformer/ Attention Is All You Need https://arxiv.org/abs/1706.03762 The Unreasonable Effectiveness of Recurrent Neural Networks http://karpathy.github.io/2015/05/21/rnn-effectiveness/ Neural Networks in 11 Lines of Code https://iamtrask.github.io/2015/07/12/basic-python-network/ Understanding LSTM Networks (Chris Olah's blog post) http://colah.github.io/posts/2015-08-Understanding-LSTMs/ Luis Serrano's YouTube Channel https://www.youtube.com/channel/UCgBncpylJ1kiVaPyP-PZauQ Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks https://arxiv.org/abs/1908.10084 GPT (Generative Pre-trained Transformer) models https://jalammar.github.io/illustrated-gpt2/ https://openai.com/research/gpt-4 BERT (Bidirectional Encoder Representations from Transformers) https://jalammar.github.io/illustrated-bert/ https://arxiv.org/abs/1810.04805 RoPE (Rotary Positional Encoding) https://arxiv.org/abs/2104.09864 (Linked paper discussing rotary embeddings) Grouped Query Attention https://arxiv.org/pdf/2305.13245 RLHF (Reinforcement Learning from Human Feedback) https://openai.com/research/learning-from-human-preferences https://arxiv.org/abs/1706.03741 DPO (Direct Preference Optimization) https://arxiv.org/abs/2305.18290
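As a minimal sketch of the retrieval half of RAG and semantic search that Jay discusses, the flow is: embed the documents and the query, rank by cosine similarity, then prepend the best passage to the prompt sent to the generator. The bag-of-words `embed` function and the three-document corpus are toy assumptions; a real system would call an embedding model here instead.

```python
import numpy as np

DOCS = [
    "Command models support retrieval augmented generation.",
    "Semantic search ranks passages by meaning rather than keywords.",
    "The transformer architecture was introduced in 2017.",
]

def embed(texts, vocab):
    # Placeholder embedding: bag-of-words counts, standing in for a real sentence encoder.
    vecs = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for w in t.lower().split():
            w = w.strip(".,")
            if w in vocab:
                vecs[i, vocab[w]] += 1
    return vecs

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

vocab = {w: i for i, w in enumerate(sorted({w.strip(".,") for d in DOCS for w in d.lower().split()}))}
doc_vecs = embed(DOCS, vocab)
query = "how does semantic search work"
q_vec = embed([query], vocab)[0]

# Rank documents by similarity to the query and build the augmented prompt.
ranked = sorted(range(len(DOCS)), key=lambda i: -cosine(q_vec, doc_vecs[i]))
context = DOCS[ranked[0]]
prompt = f"Answer using the context.\nContext: {context}\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the generator model
```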
Daniel Cahn, co-founder of Slingshot AI, joins us to discuss the potential of AI in therapy. Why are anxiety and depression affecting such a large share of the population? To what extent are these real categories? Why is mental health getting worse? How often do you want an AI to agree with you? What are the ethics of persuasive AI? You will discover all of this in the conversation. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. Daniel Cahn (who is also hiring ML engineers by the way!) https://x.com/thecahnartist?lang=en / cahnd https://thinkingmachinespodcast.com/ TOC: 00:00:00 Intro 00:01:56 Therapy effectiveness vs drugs and societal implications 00:04:02 Mental health categories: Iatrogenesis and social constructs 00:10:19 Psychiatric treatment models and cognitive perspectives 00:13:30 AI design and human-like interactions: Intentionality debates 00:20:04 AI in therapy: Ethics, anthropomorphism, and loneliness mitigation 00:28:13 Therapy efficacy: Neuroplasticity, suffering, and AI placebos 00:33:29 AI's impact on human agency and cognitive modeling 00:41:17 Social media's effects on brain structure and behavior 00:50:46 AI ethics: Altering values and free will considerations 01:00:00 Work value perception and personal identity formation 01:13:37 Free will, agency, and mutable personal identity in therapy 01:24:27 AI in healthcare: Challenges, ethics, and therapy improvements 01:53:25 AI development: Societal impacts and cultural implications Full references in the YouTube video description: https://www.youtube.com/watch?v=7hwX6OZyNC0 (and baked into mp3 metadata)
Prof. Subbarao Kambhampati argues that while LLMs are impressive and useful tools, especially for creative tasks, they have fundamental limitations in logical reasoning and cannot provide guarantees about the correctness of their outputs. He advocates for hybrid approaches that combine LLMs with external verification systems. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api. TOC (sorry, the chapters baked into the MP3 were wrong due to LLM hallucination!) [00:00:00] Intro [00:02:06] Bio [00:03:02] LLMs are n-gram models on steroids [00:07:26] Is natural language a formal language? [00:08:34] Natural language is formal? [00:11:01] Do LLMs reason? [00:19:13] Definition of reasoning [00:31:40] Creativity in reasoning [00:50:27] Chollet's ARC challenge [01:01:31] Can we reason without verification? [01:10:00] LLMs can't solve some tasks [01:19:07] LLM Modulo framework [01:29:26] Future trends of architecture [01:34:48] Future research directions Youtube version: https://www.youtube.com/watch?v=y1WnHpedi2A Refs: (we didn't have space for URLs here, check YT video description instead) Can LLMs Really Reason and Plan? On the Planning Abilities of Large Language Models : A Critical Investigation Chain of Thoughtlessness? An Analysis of CoT in Planning On the Self-Verification Limitations of Large Language Models on Reasoning and Planning Tasks LLMs Can't Plan, But Can Help Planning in LLM-Modulo Frameworks Embers of Autoregression: Understanding Large Language Models Through the Problem They are Trained to Solve "Task Success" is not Enough Partition function (number theory) (Srinivasa Ramanujan and G.H. Hardy's work) Poincaré conjecture Gödel's incompleteness theorems ROT13 (Rotate13, "rotate by 13 places") A Mathematical Theory of Communication (C. E. SHANNON) Sparks of AGI Kambhampati thesis on speech recognition (1983) PlanBench: An Extensible Benchmark for Evaluating Large Language Models on Planning and Reasoning about Change Explainable human-AI interaction Tree of Thoughts On the Measure of Intelligence (ARC Challenge) Getting 50% (SoTA) on ARC-AGI with GPT-4o (Ryan Greenblatt ARC solution) PROGRAMS WITH COMMON SENSE (John McCarthy) - "AI should be an advice taker program" Original chain of thought paper ICAPS 2024 Keynote: Dale Schuurmans on "Computing and Planning with Large Generative Models" (COT) The Hardware Lottery (Hooker) A Path Towards Autonomous Machine Intelligence (JEPA/LeCun) AlphaGeometry FunSearch Emergent Abilities of Large Language Models Language models are not naysayers (Negation in LLMs) The Reversal Curse: LLMs trained on "A is B" fail to learn "B is A" Embracing negative results
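The generate-and-verify loop at the heart of the LLM-Modulo proposal can be sketched as follows. This is an illustrative toy, not the authors' implementation: `llm_propose` stands in for a real model call, and a trivial grid-navigation task stands in for a planning problem. The key point the code captures is that only plans approved by a sound external verifier are ever returned, and the verifier's critique is fed back to the generator.

```python
import random

def llm_propose(task, critique=None):
    # Placeholder: a real system would prompt an LLM with the task (and the
    # verifier's critique of the previous attempt) and parse a plan from the reply.
    return [random.choice(["up", "down", "left", "right"]) for _ in range(4)]

def verify(task, candidate):
    # External, sound checker for the toy task "reach (1, 1) from (0, 0) in 4 moves".
    moves = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}
    x = y = 0
    for step in candidate:
        dx, dy = moves[step]
        x, y = x + dx, y + dy
    ok = (x, y) == (1, 1)
    return ok, None if ok else f"plan ended at {(x, y)}, expected (1, 1)"

def llm_modulo(task, max_rounds=200):
    critique = None
    for _ in range(max_rounds):
        plan = llm_propose(task, critique)
        ok, critique = verify(task, plan)
        if ok:
            return plan   # only externally verified plans are ever returned
    return None           # the loop gives up rather than guessing

print(llm_modulo("reach (1, 1) from (0, 0) in 4 moves"))
```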
How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don’t worry too much about x-risk from alien invasions. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval augmented generation. Try it now - get 2,000 free queries monthly at brave.com/api. Sayash Kapoor is a computer science Ph.D. candidate at Princeton University's Center for Information Technology Policy. His research focuses on the societal impact of AI. Kapoor has previously worked on AI in both industry and academia, with experience at Facebook, Columbia University, and EPFL Switzerland. He is a recipient of a best paper award at ACM FAccT and an impact recognition award at ACM CSCW. Notably, Kapoor was included in TIME's inaugural list of the 100 most influential people in AI. Sayash Kapoor https://x.com/sayashk https://www.cs.princeton.edu/~sayashk/ Arvind Narayanan (other half of the AI Snake Oil duo) https://x.com/random_walker AI existential risk probabilities are too unreliable to inform policy https://www.aisnakeoil.com/p/ai-existential-risk-probabilities Pre-order AI Snake Oil Book https://amzn.to/4fq2HGb AI Snake Oil blog https://www.aisnakeoil.com/ AI Agents That Matter https://arxiv.org/abs/2407.01502 Shortcut learning in deep neural networks https://www.semanticscholar.org/paper/Shortcut-learning-in-deep-neural-networks-Geirhos-Jacobsen/1b04936c2599e59b120f743fbb30df2eed3fd782 77% Of Employees Report AI Has Increased Workloads And Hampered Productivity, Study Finds https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/ TOC: 00:00:00 Intro 00:01:57 How seriously should we take Xrisk threat? 00:02:55 Risk too unreliable to inform policy 00:10:20 Overinflated risks 00:12:05 Perils of utility maximisation 00:13:55 Scaling vs airplane speeds 00:17:31 Shift to smaller models? 00:19:08 Commercial LLM ecosystem 00:22:10 Synthetic data 00:24:09 Is AI complexifying our jobs? 00:25:50 Does ChatGPT make us dumber or smarter? 00:26:55 Are AI Agents overhyped? 00:28:12 Simple vs complex baselines 00:30:00 Cost tradeoff in agent design 00:32:30 Model eval vs downstream perf 00:36:49 Shortcuts in metrics 00:40:09 Standardisation of agent evals 00:41:21 Humans in the loop 00:43:54 Levels of agent generality 00:47:25 ARC challenge
Sara Hooker is VP of Research at Cohere and leader of Cohere for AI. We discuss her recent paper critiquing the use of compute thresholds, measured in FLOPs (floating point operations), as an AI governance strategy. We explore why this approach, recently adopted in both US and EU AI policies, may be problematic and oversimplified. Sara explains the limitations of using raw computational power as a measure of AI capability or risk, and discusses the complex relationship between compute, data, and model architecture. Equally important, we go into Sara's work on "The AI Language Gap." This research highlights the challenges and inequalities in developing AI systems that work across multiple languages. Sara discusses how current AI models, predominantly trained on English and a handful of high-resource languages, fail to serve the linguistic diversity of our global population. We explore the technical, ethical, and societal implications of this gap, and discuss potential solutions for creating more inclusive and representative AI systems. We broadly discuss the relationship between language, culture, and AI capabilities, as well as the ethical considerations in AI development and deployment. YT Version: https://youtu.be/dBZp47999Ko TOC: [00:00:00] Intro [00:02:12] FLOPS paper [00:26:42] Hardware lottery [00:30:22] The Language gap [00:33:25] Safety [00:38:31] Emergent [00:41:23] Creativity [00:43:40] Long tail [00:44:26] LLMs and society [00:45:36] Model bias [00:48:51] Language and capabilities [00:52:27] Ethical frameworks and RLHF Sara Hooker https://www.sarahooker.me/ https://www.linkedin.com/in/sararosehooker/ https://scholar.google.com/citations?user=2xy6h3sAAAAJ&hl=en https://x.com/sarahookr Interviewer: Tim Scarfe Refs The AI Language gap https://cohere.com/research/papers/the-AI-language-gap.pdf On the Limitations of Compute Thresholds as a Governance Strategy. https://arxiv.org/pdf/2407.05694v1 The Multilingual Alignment Prism: Aligning Global and Local Preferences to Reduce Harm https://arxiv.org/pdf/2406.18682 Cohere Aya https://cohere.com/research/aya RLHF Can Speak Many Languages: Unlocking Multilingual Preference Optimization for LLMs https://arxiv.org/pdf/2407.02552 Back to Basics: Revisiting REINFORCE Style Optimization for Learning from Human Feedback in LLMs https://arxiv.org/pdf/2402.14740 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/ EU AI Act https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf The bitter lesson http://www.incompleteideas.net/IncIdeas/BitterLesson.html Neel Nanda interview https://www.youtube.com/watch?v=_Ygf0GnlwmY Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet https://transformer-circuits.pub/2024/scaling-monosemanticity/ Chollet's ARC challenge https://github.com/fchollet/ARC-AGI Ryan Greenblatt on ARC https://www.youtube.com/watch?v=z9j3wB1RRGA Disclaimer: This is the third video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.
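For context on what a compute threshold actually measures, here is a back-of-the-envelope sketch using the common approximation of roughly 6 FLOPs per parameter per training token. The 70B-parameter, 2T-token model is hypothetical, and the threshold figures shown (1e26 operations in the US Executive Order, 1e25 FLOPs in the EU AI Act) are the values those policies cite; Hooker's paper argues that this single number is a weak proxy for capability or risk.

```python
def approx_training_flops(n_params: float, n_tokens: float) -> float:
    """Common rule-of-thumb estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * n_params * n_tokens

THRESHOLDS = {"US Executive Order (2023)": 1e26, "EU AI Act": 1e25}

# Hypothetical model: 70B parameters trained on 2T tokens.
flops = approx_training_flops(70e9, 2e12)
print(f"estimated training compute: {flops:.2e} FLOPs")
for name, limit in THRESHOLDS.items():
    status = "exceeds" if flops > limit else "is below"
    print(f"  {status} the {name} threshold of {limit:.0e}")
```

The point of the paper is that two models landing on opposite sides of such a line can differ enormously in capability depending on data quality, architecture, and post-training, which is exactly the oversimplification discussed in the episode.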
Murray Shanahan is a professor of Cognitive Robotics at Imperial College London and a senior research scientist at DeepMind. He challenges our assumptions about AI consciousness and urges us to rethink how we talk about machine intelligence. We explore the dangers of anthropomorphizing AI, the limitations of current language in describing AI capabilities, and the fascinating intersection of philosophy and artificial intelligence. Show notes and full references: https://docs.google.com/document/d/1ICtBI574W-xGi8Z2ZtUNeKWiOiGZ_DRsp9EnyYAISws/edit?usp=sharing Prof Murray Shanahan: https://www.doc.ic.ac.uk/~mpsha/ (look at his selected publications) https://scholar.google.co.uk/citations?user=00bnGpAAAAAJ&hl=en https://en.wikipedia.org/wiki/Murray_Shanahan https://x.com/mpshanahan Interviewer: Dr. Tim Scarfe Refs (links in the Google doc linked above): Role play with large language models Waluigi effect "Conscious Exotica" - Paper by Murray Shanahan (2016) "Simulators" - Article by Janus on LessWrong "Embodiment and the Inner Life" - Book by Murray Shanahan (2010) "The Technological Singularity" - Book by Murray Shanahan (2015) "Simulacra as Conscious Exotica" - Paper by Murray Shanahan (a newer version of the original, focused on LLMs) A recent paper by Anthropic on using autoencoders to find features in language models (referring to the "Scaling Monosemanticity" paper) Work by Peter Godfrey-Smith on octopus consciousness "Metaphors We Live By" - Book by George Lakoff (1980s) Work by Aaron Sloman on the concept of "space of possible minds" (1984 article mentioned) Wittgenstein's "Philosophical Investigations" (posthumously published) Daniel Dennett's work on the "intentional stance" Alan Turing's original paper on the Turing Test (1950) Thomas Nagel's paper "What is it like to be a bat?" (1974) John Searle's Chinese Room Argument (mentioned but not detailed) Work by Richard Evans on tackling reasoning problems Claude Shannon's quote on knowledge and control "Are We Bodies or Souls?" - Book by Richard Swinburne Reference to work by Ethan Perez and others at Anthropic on potential deceptive behavior in language models Reference to a paper by Murray Shanahan and Antonia Creswell on the "selection inference framework" Mention of work by Francois Chollet, particularly the ARC (Abstraction and Reasoning Corpus) challenge Reference to Elizabeth Spelke's work on core knowledge in infants Mention of Karl Friston's work on planning as inference (active inference) The film "Ex Machina" - Murray Shanahan was the scientific advisor "The Waluigi Effect" Anthropic's constitutional AI approach Loom system by Lara Reynolds and Kyle McDonald for visualizing conversation trees DeepMind's AlphaGo (mentioned multiple times as an example) Mention of the "Golden Gate Claude" experiment Reference to an interview Tim Scarfe conducted with University of Toronto students about self-attention controllability theorem Mention of an interview with Irina Rish Reference to an interview Tim Scarfe conducted with Daniel Dennett Reference to an interview with Maria Santacaterina Mention of an interview with Philip Goff Nick Chater and Morten Christiansen's book ("The Language Game: How Improvisation Created Language and Changed the World") Peter Singer's work from 1975 on ascribing moral status to conscious beings Demis Hassabis' discussion on the "ladder of creativity" Reference to B.F. Skinner and behaviorism
David Chalmers - Reality+

2024-07-08 01:17:57

In the coming decades, the technology that enables virtual and augmented reality will improve beyond recognition. Within a century, world-renowned philosopher David J. Chalmers predicts, we will have virtual worlds that are impossible to distinguish from non-virtual worlds. But is virtual reality just escapism? In a highly original work of 'technophilosophy', Chalmers argues categorically, no: virtual reality is genuine reality. Virtual worlds are not second-class worlds. We can live a meaningful life in virtual reality - and increasingly, we will. What is reality, anyway? How can we lead a good life? Is there a god? How do we know there's an external world - and how do we know we're not living in a computer simulation? In Reality+, Chalmers conducts a grand tour of philosophy, using cutting-edge technology to provide invigorating new answers to age-old questions. David J. Chalmers is an Australian philosopher and cognitive scientist specializing in the areas of philosophy of mind and philosophy of language. He is Professor of Philosophy and Neural Science at New York University, as well as co-director of NYU's Center for Mind, Brain, and Consciousness. Chalmers is best known for his work on consciousness, including his formulation of the "hard problem of consciousness." Reality+: Virtual Worlds and the Problems of Philosophy https://amzn.to/3RYyGD2 https://consc.net/ https://x.com/davidchalmers42 00:00:00 Reality+ Intro 00:12:02 GPT conscious? 10/10 00:14:19 The consciousness processor thought experiment (11/10) 00:20:34 Intelligence and Consciousness entangled? 10/10 00:22:44 Karl Friston / Meta Problem 10/10 00:29:05 Knowledge argument / subjective experience (6/10) 00:32:34 Emergence 11/10 (best chapter) 00:42:45 Working with Douglas Hofstadter 10/10 00:46:14 Intelligence is analogy making? 10/10 00:50:47 Intelligence explosion 8/10 00:58:44 Hypercomputation 10/10 01:09:44 Who designed the designer? (7/10) 01:13:57 Experience machine (7/10)
Ryan Greenblatt from Redwood Research recently published "Getting 50% on ARC-AGI with GPT-4o," where he used GPT-4o to reach a state-of-the-art accuracy on Francois Chollet's ARC Challenge by generating many Python programs. Sponsor: Sign up to Kalshi here https://kalshi.onelink.me/1r91/mlst -- the first 500 traders who deposit $100 will get a free $20 credit! Important disclaimer - In case it's not obvious - this is basically gambling and a *high risk* activity - only trade what you can afford to lose. We discuss: - Ryan's unique approach to solving the ARC Challenge and achieving impressive results. - The strengths and weaknesses of current AI models. - How AI and humans differ in learning and reasoning. - Combining various techniques to create smarter AI systems. - The potential risks and future advancements in AI, including the idea of agentic AI. https://x.com/RyanPGreenblatt https://www.redwoodresearch.org/ Refs: Getting 50% (SoTA) on ARC-AGI with GPT-4o [Ryan Greenblatt] https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt On the Measure of Intelligence [Chollet] https://arxiv.org/abs/1911.01547 Connectionism and Cognitive Architecture: A Critical Analysis [Jerry A. Fodor and Zenon W. Pylyshyn] https://ruccs.rutgers.edu/images/personal-zenon-pylyshyn/proseminars/Proseminar13/ConnectionistArchitecture.pdf Software 2.0 [Andrej Karpathy] https://karpathy.medium.com/software-2-0-a64152b37c35 Why Greatness Cannot Be Planned: The Myth of the Objective [Kenneth Stanley] https://amzn.to/3Wfy2E0 Biographical account of Terence Tao’s mathematical development. [M.A.(KEN) CLEMENTS] https://gwern.net/doc/iq/high/smpy/1984-clements.pdf Model Evaluation and Threat Research (METR) https://metr.org/ Why Tool AIs Want to Be Agent AIs https://gwern.net/tool-ai Simulators - Janus https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators AI Control: Improving Safety Despite Intentional Subversion https://www.lesswrong.com/posts/d9FJHawgkiMSPjagR/ai-control-improving-safety-despite-intentional-subversion https://arxiv.org/abs/2312.06942 What a Compute-Centric Framework Says About Takeoff Speeds https://www.openphilanthropy.org/research/what-a-compute-centric-framework-says-about-takeoff-speeds/ Global GDP over the long run https://ourworldindata.org/grapher/global-gdp-over-the-long-run?yScale=log Safety Cases: How to Justify the Safety of Advanced AI Systems https://arxiv.org/abs/2403.10462 The Danger of a “Safety Case" http://sunnyday.mit.edu/The-Danger-of-a-Safety-Case.pdf The Future Of Work Looks Like A UPS Truck (~02:15:50) https://www.npr.org/sections/money/2014/05/02/308640135/episode-536-the-future-of-work-looks-like-a-ups-truck SWE-bench https://www.swebench.com/ Using DeepSpeed and Megatron to Train Megatron-Turing NLG 530B, A Large-Scale Generative Language Model https://arxiv.org/pdf/2201.11990 Algorithmic Progress in Language Models https://epochai.org/blog/algorithmic-progress-in-language-models
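A hedged sketch of the sampling-and-filtering strategy described in Ryan's post (not his actual code): ask the model for many candidate Python programs, keep only the ones that reproduce every training demonstration, and apply a surviving program to the test input with a majority vote over outputs. `sample_program_from_llm` is a placeholder for a GPT-4o call that returns source code, and the toy task is invented for the example.

```python
from collections import Counter

def sample_program_from_llm(task_description: str) -> str:
    # Placeholder: a real implementation would prompt the model with the ARC
    # grids and parse a Python function out of the response.
    return "def transform(grid):\n    return [row[::-1] for row in grid]"

def run_program(src: str, grid):
    namespace = {}
    exec(src, namespace)   # executing untrusted model output needs sandboxing in practice
    return namespace["transform"](grid)

def solve(train_pairs, test_input, n_samples=100):
    candidates = []
    for _ in range(n_samples):
        src = sample_program_from_llm("arc task")
        try:
            if all(run_program(src, x) == y for x, y in train_pairs):
                candidates.append(src)
        except Exception:
            continue           # many sampled programs simply crash; discard them
    if not candidates:
        return None
    # Majority vote over the outputs of all surviving programs.
    outputs = Counter(str(run_program(src, test_input)) for src in candidates)
    return outputs.most_common(1)[0][0]

train = [([[1, 2], [3, 4]], [[2, 1], [4, 3]])]   # toy "mirror each row" task
print(solve(train, [[5, 6], [7, 8]]))
```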
Aidan Gomez, CEO of Cohere, reveals how they're tackling AI hallucinations and improving reasoning abilities. He also explains why Cohere doesn't use any output from GPT-4 for training their models. Aidan shares his personal insights into the world of AI and LLMs and Cohere's unique approach to solving real-world business problems, and how their models are set apart from the competition. Aidan reveals how they are making major strides in AI technology, discussing everything from last mile customer engineering to the robustness of prompts and future architectures. He also touches on the broader implications of AI for society, including potential risks and the role of regulation. He discusses Cohere's guiding principles and the health of the startup scene, with a particular focus on enterprise applications. Aidan provides a rare look into the internal workings of Cohere and their vision for driving productivity and innovation. https://cohere.com/ https://x.com/aidangomez Check out Cohere's amazing new Command R* models here https://cohere.com/command Disclaimer: This is the second video from our Cohere partnership. We were not told what to say in the interview, and didn't edit anything out from the interview.
The ARC Challenge, created by Francois Chollet, tests how well AI systems can generalize from a few examples in a grid-based intelligence test. We interview the current winners of the ARC Challenge—Jack Cole, Mohamed Osman and their collaborator Michael Hodel. They discuss how they tackled ARC (Abstraction and Reasoning Corpus) using language models. We also discuss the new "50%" public set approach announced today from Redwood Research (Ryan Greenblatt). Jack and Mohamed explain their winning approach, which involves fine-tuning a language model on a large, specifically-generated dataset and then doing additional fine-tuning at test-time, a technique known in this context as "active inference". They use various strategies to represent the data for the language model and believe that with further improvements, the accuracy could reach above 50%. Michael talks about his work on generating new ARC-like tasks to help train the models. They also debate whether their methods stay true to the "spirit" of Chollet's measure of intelligence. Despite some concerns, they agree that their solutions are promising and adaptable for other similar problems. Note: Jack's team is still the current official winner at 33% on the private set. Ryan's entry is not on the private leaderboard or eligible. Chollet invented ARC in 2019 (not 2017 as stated) "Ryan's entry is not a new state of the art. We don't know exactly how well it does since it was only evaluated on 100 tasks from the evaluation set and does 50% on those, reportedly. Meanwhile, Jack's team's (MindsAI's) solution does 54% on the entire eval set, and it is seemingly possible to do 60-70% with an ensemble" Jack Cole: https://x.com/Jcole75Cole https://lab42.global/community-interview-jack-cole/ Mohamed Osman: Mohamed is looking to do a PhD in AI/ML, can you help him? Email: mothman198@outlook.com https://www.linkedin.com/in/mohamedosman1905/ Michael Hodel: https://arxiv.org/pdf/2404.07353v1 https://www.linkedin.com/in/michael-hodel/ https://x.com/bayesilicon https://github.com/michaelhodel Getting 50% (SoTA) on ARC-AGI with GPT-4o - Ryan Greenblatt https://redwoodresearch.substack.com/p/getting-50-sota-on-arc-agi-with-gpt Neural networks for abstraction and reasoning: Towards broad generalization in machines [Mikel Bober-Irizar, Soumya Banerjee] https://arxiv.org/pdf/2402.03507 Measure of intelligence: https://arxiv.org/abs/1911.01547 YT version: https://youtu.be/jSAT_RuJ_Cg
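An illustrative sketch of the test-time fine-tuning ("active inference") recipe the team describe: start from a model already fine-tuned on a large corpus of generated ARC-like tasks, then, for each new task, take a few extra gradient steps on augmented versions of that task's own demonstration pairs before predicting the test output. All functions here are stubs standing in for the real model and data pipeline, not their code.

```python
def augment(pairs):
    # Cheap augmentations that preserve the task's logic, e.g. transposing the grids.
    transposed = [([list(r) for r in zip(*x)], [list(r) for r in zip(*y)]) for x, y in pairs]
    return pairs + transposed

def gradient_step(model, batch):
    # Placeholder for one optimizer step on serialized (input grid -> output grid) text.
    model["steps"] += 1
    return model

def predict(model, test_input):
    # Placeholder for greedy decoding of the output grid.
    return test_input   # identity, just to keep the sketch runnable

def solve_task(base_model, train_pairs, test_input, n_steps=20):
    model = dict(base_model)          # copy so per-task tuning doesn't leak across tasks
    data = augment(train_pairs)
    for step in range(n_steps):
        model = gradient_step(model, data[step % len(data)])
    return predict(model, test_input)

base = {"steps": 0}   # stand-in for the broadly fine-tuned base model
print(solve_task(base, [([[1, 0], [0, 1]], [[0, 1], [1, 0]])], [[1, 1], [0, 0]]))
```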
Nick Frosst, co-founder of Cohere, on the future of LLMs, and AGI. Learn how Cohere is solving real problems for business with their new AI models. This is the first podcast from our new Cohere partnership! Nick talks about his journey at Google Brain, working with AI legends like Geoff Hinton, and the amazing things his company, Cohere, is doing. From creating the most useful language models for businesses to making tools for developers, Nick shares a lot of interesting insights. He even talks about his band, Good Kid! Nick said that RAG is one of the best features of Cohere's new Command R* models. We are about to release a deep-dive on RAG with Patrick Lewis from Cohere, keep an eye out for that - he explains why their models are specifically optimised for RAG use cases. Learn more about Cohere Command R* models here: https://cohere.com/command https://github.com/cohere-ai/cohere-toolkit Nick's band Good Kid: https://goodkidofficial.com/ Nick on Twitter: https://x.com/nickfrosst Disclaimer: We are in a partnership with Cohere to release content for them. We were not told what to say in the interview, and didn't edit anything out from the interview. We are currently planning to release 2 shows per month under the partnership about their AI platform, research and strategy.
These two scientists have mapped out the insides, or “reachable space”, of a language model using control theory, and what they discovered was extremely surprising. Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening. https://patreon.com/mlst YT version: https://youtu.be/Bpgloy1dDn0 We are joined by Aman Bhargava from Caltech and Cameron Witkowski from the University of Toronto to discuss their groundbreaking paper, “What’s the Magic Word? A Control Theory of LLM Prompting.” (the main theorem on self-attention controllability was developed in collaboration with Dr. Shi-Zhuo Looi from Caltech). They frame LLM systems as discrete stochastic dynamical systems. This means they look at LLMs in a structured way, similar to how we analyze control systems in engineering. They explore the “reachable set” of outputs for an LLM. Essentially, this is the range of possible outputs the model can generate from a given starting point when influenced by different prompts. The research highlights that prompt engineering, or optimizing the input tokens, can significantly influence LLM outputs. They show that even short prompts can drastically alter the likelihood of specific outputs. Aman and Cameron’s work might be a boon for understanding and improving LLMs. They suggest that a deeper exploration of control theory concepts could lead to more reliable and capable language models. We dropped an additional, more technical video on the research on our Twitter account here: https://x.com/MLStreetTalk/status/1795093759471890606 Additional 20 minutes of unreleased footage on our Patreon here: https://www.patreon.com/posts/whats-magic-word-104922629 What's the Magic Word? A Control Theory of LLM Prompting (Aman Bhargava, Cameron Witkowski, Manav Shah, Matt Thomson) https://arxiv.org/abs/2310.04444 LLM Control Theory Seminar (April 2024) https://www.youtube.com/watch?v=9QtS9sVBFM0 Society for the pursuit of AGI (Cameron founded it) https://agisociety.mydurable.com/ Roger Federer demo http://conway.languagegame.io/inference Neural Cellular Automata, Active Inference, and the Mystery of Biological Computation (Aman) https://aman-bhargava.com/ai/neuro/neuromorphic/2024/03/25/nca-do-active-inference.html Aman and Cameron also want to thank Dr. Shi-Zhuo Looi and Prof. Matt Thomson from Caltech for help and advice on their research. (https://thomsonlab.caltech.edu/ and https://pma.caltech.edu/people/looi-shi-zhuo) https://x.com/ABhargava2000 https://x.com/witkowski_cam
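A small empirical sketch of the "reachable set" idea (not the paper's formal machinery): fix an imposed state sequence, enumerate short control prompts, and record which next tokens some prompt can make the argmax output. The toy `logits` function below is a stand-in for a real LLM, invented so that different prompts genuinely change the argmax.

```python
from itertools import product

VOCAB = ["cat", "dog", "sat", "ran", "the"]

def logits(sequence):
    # Toy stand-in for an LLM: crude co-occurrence scoring of the next token.
    scores = {tok: 0.0 for tok in VOCAB}
    if "cat" in sequence:
        scores["sat"] += 2.0
    if "dog" in sequence:
        scores["ran"] += 2.0
    if sequence and sequence[-1] == "the":
        scores["cat"] += 1.0
        scores["dog"] += 1.0
    return scores

def reachable_set(state, max_prompt_len=2):
    """All tokens that some control prompt of length <= max_prompt_len makes the argmax."""
    reachable = set()
    for k in range(max_prompt_len + 1):
        for prompt in product(VOCAB, repeat=k):
            s = logits(list(prompt) + state)
            reachable.add(max(s, key=s.get))
    return reachable

print(reachable_set(["the"]))   # which outputs can short prompts steer the model toward?
```

With a real model the same question becomes a search or optimization over prompts rather than brute-force enumeration, which is where the control-theoretic framing in the paper comes in.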
Maria Santacaterina, with her background in the humanities, brings a critical perspective on the current state and future implications of AI technology, its impact on society, and the nature of human intelligence and creativity. She emphasizes that despite technological advancements, AI lacks fundamental human traits such as consciousness, empathy, intuition, and the ability to engage in genuine creative processes. Maria argues that AI, at its core, processes data but does not have the capability to understand or generate new, intrinsic meaning or ideas as humans do. Throughout the conversation, Maria highlights her concern about the overreliance on AI in critical sectors such as healthcare, the justice system, and business. She stresses that while AI can serve as a tool, it should not replace human judgment and decision-making. Maria points out that AI systems often operate on past data, which may lead to outdated or incorrect decisions if not carefully managed. The discussion also touches upon the concept of "adaptive resilience", which Maria describes in her book. She explains adaptive resilience as the capacity for individuals and enterprises to evolve and thrive amidst challenges by leveraging technology responsibly, without undermining human values and capabilities. A significant portion of the conversation focussed on ethical considerations surrounding AI. Tim and Maria agree that there's a pressing need for strong governance and ethical frameworks to guide AI development and deployment. They discuss how AI, without proper ethical considerations, risks exacerbating issues like privacy invasion, misinformation, and unintended discrimination. Maria is skeptical about claims of achieving Artificial General Intelligence (AGI) or a technological singularity where machines surpass human intelligence in all aspects. She argues that such scenarios neglect the complex, dynamic nature of human intelligence and consciousness, which cannot be fully replicated or replaced by machines. Tim and Maria discuss the importance of keeping human agency and creativity at the forefront of technology development. Maria asserts that efforts to automate or standardize complex human actions and decisions are misguided and could lead to dehumanizing outcomes. They both advocate for using AI as an aid to enhance human capabilities rather than a substitute. In closing, Maria encourages a balanced approach to AI adoption, urging stakeholders to prioritize human well-being, ethical standards, and societal benefit above mere technological advancement. The conversation ends with Maria pointing people to her book for more in-depth analysis and thoughts on the future interaction between humans and technology. Buy Maria's book here: https://amzn.to/4avF6kq https://www.linkedin.com/in/mariasantacaterina TOC 00:00:00 - Intro to Book 00:03:23 - What Life Is 00:10:10 - Agency 00:18:04 - Tech and Society 00:21:51 - System 1 and 2 00:22:59 - We Are Being Pigeonholed 00:30:22 - Agency vs Autonomy 00:36:37 - Explanations 00:40:24 - AI Reductionism 00:49:50 - How Are Humans Intelligent 01:00:22 - Semantics 01:01:53 - Emotive AI and Pavlovian Dogs 01:04:05 - Technology, Social Media and Organisation 01:18:34 - Systems Are Not That Automated 01:19:33 - Hiring 01:22:34 - Subjectivity in Orgs 01:32:28 - The AGI Delusion 01:45:37 - GPT-laziness Syndrome 01:54:58 - Diversity Preservation 01:58:24 - Ethics 02:11:43 - Moral Realism 02:16:17 - Utopia 02:18:02 - Reciprocity 02:20:52 - Tyranny of Categorisation
Thomas Parr and his collaborators wrote a book titled "Active Inference: The Free Energy Principle in Mind, Brain and Behavior" which introduces Active Inference from both a high-level conceptual perspective and a low-level mechanistic, mathematical perspective. Active inference, developed by the legendary neuroscientist Prof. Karl Friston, is a unifying mathematical framework which frames living systems as agents which minimize surprise and free energy in order to resist entropy and persist over time. It unifies various perspectives from physics, biology, statistics, and psychology - and allows us to explore deep questions about agency, biology, causality, modelling, and consciousness. Buy Active Inference: The Free Energy Principle in Mind, Brain, and Behavior https://amzn.to/4dj0iMj YT version: https://youtu.be/lbb-Si5wa_o Please support us on Patreon to get access to the private Discord server, bi-weekly calls, early access and ad-free listening. https://patreon.com/mlst Chapters should be embedded in the mp3; let me know if there are any issues.
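For readers who want the central quantity in one line, the standard textbook decomposition of variational free energy (stated here from the general literature, not quoted from the book) is:

```latex
% q(s) is the agent's approximate posterior over hidden states, p(o, s) its generative model.
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = \underbrace{D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right]}_{\text{divergence from the true posterior}}
  \; - \; \underbrace{\ln p(o)}_{\text{log evidence}}
```

Since the KL term is non-negative, F upper-bounds surprise (negative log evidence), which is why minimizing free energy can be read as the agent simultaneously refining its beliefs and avoiding surprising observations.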