MLOps.community


Author: Demetrios Brinkmann

Subscribed: 238 · Played: 13,680

Description

Weekly talks and fireside chats about everything happening in the emerging space of DevOps for Machine Learning, a.k.a. MLOps (Machine Learning Operations).
396 Episodes
Krishna Sridhar is an experienced engineering leader passionate about building wonderful products powered by machine learning. Efficient Deployment of Models at the Edge // MLOps Podcast #283 with Krishna Sridhar, Vice President at Qualcomm. Big shout out to Qualcomm for sponsoring this episode! // Abstract Qualcomm® AI Hub helps to optimize, validate, and deploy machine learning models on-device for vision, audio, and speech use cases. With Qualcomm® AI Hub, you can: Convert trained models from frameworks like PyTorch and ONNX for optimized on-device performance on Qualcomm® devices. Profile models on-device to obtain detailed metrics including runtime, load time, and compute unit utilization. Verify numerical correctness by performing on-device inference. Easily deploy models using Qualcomm® AI Engine Direct, TensorFlow Lite, or ONNX Runtime. The Qualcomm® AI Hub Models repository contains a collection of example models that use Qualcomm® AI Hub to optimize, validate, and deploy models on Qualcomm® devices. Qualcomm® AI Hub automatically handles model translation from the source framework to the device runtime, applies hardware-aware optimizations, and performs physical performance and numerical validation. The system automatically provisions devices in the cloud for on-device profiling and inference. // Bio Krishna Sridhar leads engineering for Qualcomm® AI Hub, a system used by more than 10,000 AI developers spanning 1,000 companies to run more than 100,000 models on Qualcomm platforms. Prior to joining Qualcomm, he was Co-founder and CEO of Tetra AI, which made it easy to efficiently deploy ML models on mobile/edge hardware.
Prior to Tetra AI, Krishna helped design Apple's CoreML which was a software system mission critical to running several experiences at Apple including Camera, Photos, Siri, FaceTime, Watch, and many more across all major Apple device operating systems and all hardware and IP blocks. He has a Ph.D. in computer science from the University of Wisconsin-Madison, and a bachelor’s degree in computer science from Birla Institute of Technology and Science, Pilani, India. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://www.linkedin.com/in/srikris/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Krishna on LinkedIn: https://www.linkedin.com/in/srikris/
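The optimize → profile → verify → deploy flow the abstract describes can be sketched as a plain-Python pipeline. This is a toy illustration of the four stages only, not the real Qualcomm AI Hub client API; every function and field name below is hypothetical.

```python
# Toy sketch of an optimize -> profile -> verify -> deploy flow for an
# edge model. All names are hypothetical stand-ins; the real Qualcomm
# AI Hub exposes these stages through its own client library.

def convert(model, target_runtime):
    # Stand-in for translating a source-framework model to a device runtime.
    return {"model": model, "runtime": target_runtime}

def profile(compiled):
    # Stand-in for on-device profiling: load time, inference time, compute unit.
    return {"load_ms": 12.0, "inference_ms": 3.5, "compute_unit": "NPU"}

def verify(reference_outputs, device_outputs, tol=1e-3):
    # Numerical check: on-device inference should match the source framework.
    return all(abs(a - b) <= tol for a, b in zip(reference_outputs, device_outputs))

def deploy_pipeline(model):
    compiled = convert(model, target_runtime="tflite")
    metrics = profile(compiled)
    ok = verify([0.1, 0.9], [0.1001, 0.8999])  # made-up logits for the sketch
    return {"metrics": metrics, "numerically_correct": ok}

report = deploy_pipeline("mobilenet_v3")
print(report["numerically_correct"])  # True: within the 1e-3 tolerance
```

In the real system, the profile and verify stages run on physical devices provisioned in the cloud, which is what the toy `profile`/`verify` functions gloss over.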
Machine Learning, AI Agents, and Autonomy // MLOps Podcast #283 with Zach Wallace, Staff Software Engineer at Nearpod Inc. // Abstract Demetrios chats with Zach Wallace, engineering manager at Nearpod, about integrating AI agents in e-commerce and edtech. They discuss using agents for personalized user targeting, adapting AI models with real-time data, and ensuring efficiency through clear task definitions. Zach shares how Nearpod streamlined data integration with tools like Redshift and DBT, enabling real-time updates. The conversation covers challenges like maintaining AI in production, handling high-quality data, and meeting regulatory standards. Zach also highlights the cost-efficiency framework for deploying and decommissioning agents and the transformative potential of LLMs in education. // Bio Software Engineer with 10 years of experience. Started my career as an Application Engineer, but I have transformed into a Platform Engineer. As a Platform Engineer, I have handled the problems described below - Localization across 6-7 different languages - Building a custom local environment tool for our engineers - Building a Data Platform - Building standards and interfaces for Agentic AI within ed-tech. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links https://medium.com/renaissance-learning-r-d/data-platform-transform-a-data-monolith-9d5290a552ef --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Zach on LinkedIn: https://www.linkedin.com/in/zachary-wallace/
For the last three years, Egor has been bringing the power of AI to bear at Wise, across domains as varied as trading algorithms for Treasury, fraud detection, experiment analysis and causal inference, and recently the numerous applications unlocked by large language models. Open-source projects initiated and guided by Egor include wise-pizza, causaltune, and neural-lifetimes, with more on the way. Machine Learning, AI Agents, and Autonomy // MLOps Podcast #282 with Egor Kraev, Head of AI at Wise Plc. // Abstract Demetrios chats with Egor Kraev, principal AI scientist at Wise, about integrating large language models (LLMs) to enhance ML pipelines and humanize data interactions. Egor discusses his open-source MotleyCrew framework, career journey, and insights into AI's role in fintech, highlighting its potential to streamline operations and transform organizations. // Bio Egor first learned mathematics in the Russian tradition, then continued his studies at ETH Zurich and the University of Maryland. Egor has been doing data science since the last century, including economic and human development data analysis for nonprofits in the US, the UK, and Ghana, and 10 years as a quant, solutions architect, and occasional trader at UBS and then Deutsche Bank. Following the last decade's explosion in AI techniques, Egor became Head of AI at Mosaic Smart Data Ltd, and for the last four years has been bringing the power of AI to bear at Wise, in a variety of domains, from fraud detection to trading algorithms and causal inference for A/B testing and marketing. Egor has multiple side projects such as RL for molecular optimization, GenAI for generating and solving high school math problems, and others.
// MLOps Swag/Merch https://shop.mlops.community/ // Related Links https://github.com/transferwise/wise-pizza https://github.com/py-why/causaltune https://www.linkedin.com/posts/egorkraev_a-talk-on-experimentation-best-practices-activity-7092158531247755265-q0kt?utm_source=share&utm_medium=member_desktop --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Egor on LinkedIn: https://www.linkedin.com/in/egorkraev/
Re-Platforming Your Tech Stack // MLOps Podcast #281 with Michelle Marie Conway, Lead Data Scientist at Lloyds Banking Group and Andrew Baker, Data Science Delivery Lead at Lloyds Banking Group. // Abstract Lloyds Banking Group is on a mission to embrace the power of cloud and unlock the opportunities that it provides. Andrew, Michelle, and their MLOps team have been on a journey over the last 12 months to take their portfolio of circa 10 Machine Learning models in production and migrate them from an on-prem solution to a cloud-based environment. During the podcast, Michelle and Andrew share their reflections as well as some dos (and don’ts!) of managing the migration of an established portfolio. // Bio Michelle Marie Conway Michelle is a Lead Data Scientist in the high-performance data science team at Lloyds Banking Group. With deep expertise in managing production-level Python code and machine learning models, she has worked alongside fellow senior manager Andrew to drive the bank's transition to the Google Cloud Platform. Together, they have played a pivotal role in modernising the ML portfolio in collaboration with a remarkable ML Ops team. Originally from Ireland and now based in London, Michelle blends her technical expertise with a love for the arts. Andrew Baker Andrew graduated from the University of Birmingham with a first-class honours degree in Mathematics and Music with a Year in Computer Science and joined Lloyds Banking Group on their Retail graduate scheme in 2015. Since 2021 Andrew has worked in the world of data, firstly in shaping the Retail data strategy and most recently as a Data Science Delivery Lead, growing and managing a team of Data Scientists and Machine Learning Engineers. He has built a high-performing team responsible for building and maintaining ML models in production for the Consumer Lending division of the bank. 
Andrew is motivated by the role that data science and ML can play in transforming the business and its processes, and is focused on balancing the power of ML with the need for simplicity and explainability, enabling business users to engage with both the opportunities that exist in this space and the demands of a highly regulated environment. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://www.michelleconway.co.uk/ https://www.linkedin.com/pulse/artificial-intelligence-just-when-data-science-answer-andrew-baker-hfdge/ https://www.linkedin.com/pulse/artificial-intelligence-conundrum-generative-ai-andrew-baker-qla7e/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Michelle on LinkedIn: https://www.linkedin.com/in/michelle--conway/ Connect with Andrew on LinkedIn: https://www.linkedin.com/in/andrew-baker-90952289
Jineet Doshi is an award-winning Scientist, Machine Learning Engineer, and Leader at Intuit with over 7 years of experience. He has a proven track record of leading successful AI projects and building machine-learning models from design to production across various domains, which have impacted 100 million customers and significantly improved business metrics, leading to millions of dollars of impact. Holistic Evaluation of Generative AI Systems // MLOps Podcast #280 with Jineet Doshi, Staff AI Scientist or AI Lead at Intuit. // Abstract Evaluating LLMs is essential in establishing trust before deploying them to production. Even post-deployment, evaluation is essential to ensure LLM outputs meet expectations, making it a foundational part of LLMOps. However, evaluating LLMs remains an open problem. Unlike traditional machine learning models, LLMs can perform a wide variety of tasks such as writing poems, Q&A, summarization, etc. This leads to the question: how do you evaluate a system with such broad intelligence capabilities? This talk covers the various approaches for evaluating LLMs, such as classic NLP techniques, red teaming, and newer ones like using LLMs as a judge, along with the pros and cons of each. The talk includes evaluation of complex GenAI systems like RAG and Agents. It also covers evaluating LLMs for safety and security and the need to have a holistic approach for evaluating these very capable models. // Bio Jineet Doshi is an award-winning AI Lead and Engineer with over 7 years of experience. He has a proven track record of leading successful AI projects and building machine learning models from design to production across various domains, which have impacted millions of customers and have significantly improved business metrics, leading to millions of dollars of impact.
He is currently an AI Lead at Intuit, where he is one of the architects and developers of their Generative AI platform, which is serving Generative AI experiences for more than 100 million customers around the world. Jineet is also a guest lecturer at Stanford University as part of their Building LLM Applications class. He is on the Advisory Board of the University of San Francisco’s AI Program. He holds multiple patents in the field, is on the steering committee of the MLOps World Conference, and has also co-chaired workshops at top AI conferences like KDD. He holds a Master's degree from Carnegie Mellon University. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://www.intuit.com/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Jineet on LinkedIn: https://www.linkedin.com/in/jineetdoshi/
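Of the evaluation approaches the abstract lists, "LLM as a judge" is the easiest to sketch: a judge model scores each candidate answer against a rubric, and the scores are aggregated into a pass rate. In the sketch below the judge is a trivial keyword stub standing in for a real model call; the rubric, questions, and scoring scale are made up for illustration.

```python
# Minimal LLM-as-a-judge evaluation loop. `judge` is a stub standing in
# for a real model call; in practice it would be an LLM API request with
# the rubric embedded in the prompt, and the trade-offs discussed in the
# episode (judge bias, cost, prompt sensitivity) all apply.

RUBRIC = "Score 1 if the answer names the capital city asked about, else 0."

def judge(question, answer, rubric=RUBRIC):
    # Stub judge: a real judge would be another LLM scoring free-form text.
    expected = {"capital of France?": "paris", "capital of Japan?": "tokyo"}
    key = expected.get(question, "")
    return 1 if key and key in answer.lower() else 0

candidates = [
    ("capital of France?", "The capital of France is Paris."),
    ("capital of Japan?", "Kyoto was the old capital."),  # wrong answer
]

scores = [judge(q, a) for q, a in candidates]
print(scores)                      # [1, 0]
print(sum(scores) / len(scores))   # 0.5 pass rate
```

The same loop generalizes to RAG and agent evaluation by swapping in rubrics for groundedness or task completion, which is where the "holistic" framing in the talk comes in.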
Robert Caulk is responsible for directing software development, enabling research, coordinating company projects, quality control, proposing external collaborations, and securing funding. He believes firmly in open-source, having spent 12 years accruing over 1000 academic citations building open-source software in domains such as machine learning, image analysis, and coupled physical processes. He received his Ph.D. from Université Grenoble Alpes, France, in computational mechanics. Unleashing Unconstrained News Knowledge Graphs to Combat Misinformation // MLOps Podcast #279 with Robert Caulk, Founder of Emergent Methods. // Abstract Indexing hundreds of thousands of news articles per day into a knowledge graph (KG) was previously impossible due to the strict requirement that high-level reasoning, general world knowledge, and full-text context *must* be present for proper KG construction. The latest tools now enable such general world knowledge and reasoning to be applied cost-effectively to high volumes of news articles. Beyond the low cost of processing these news articles, these tools are also opening up a new, controversial approach to KG building - unconstrained KGs. We discuss the construction and exploration of the largest news-knowledge-graph on the planet - hosted on an endpoint at AskNews.app. During the talk, we aim to highlight some of the sacrifices and benefits that go hand-in-hand with using the infamous unconstrained KG approach. We conclude the talk by explaining how knowledge graphs like these help to mitigate misinformation. We provide some examples of how our clients are using this graph, such as generating sports forecasts, generating better social media posts, generating regional security alerts, and combating human trafficking. // Bio Robert is the founder of Emergent Methods, where he directs research and software development for large-scale applications.
He is currently overseeing the structuring of hundreds of thousands of news articles per day in order to build the best news retrieval API in the world: https://asknews.app. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://emergentmethods.ai News Retrieval API: https://asknews.app --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Rob on LinkedIn: https://www.linkedin.com/in/rcaulk/ Timestamps: [00:00] Rob's preferred coffee [00:05] Takeaways [00:55] Please like, share, leave a review, and subscribe to our MLOps channels! [01:00] Join our Local Organizer Carousel! [02:15] Knowledge Graphs and ontology [07:43] Ontology vs Noun Approach [12:46] Ephemeral tools for efficiency [17:26] Oracle to PostgreSQL migration [22:20] MEM Graph life cycle [29:14] Knowledge Graph Investigation Insights [33:37] Fine-tuning and distillation of LLMs [39:28] DAG workflow and quality control [46:23] Crawling nodes with Phi 3 Llama [50:05] AI pricing risks and strategies [56:14] Data labeling and poisoning [58:34] API costs vs News latency [1:02:10] Product focus and value [1:04:52] Ensuring reliable information [1:11:01] Podcast transcripts as News [1:13:08] Ontology trade-offs explained [1:15:00] Wrap up
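The "unconstrained KG" idea from this episode - storing whatever subjects, predicates, and objects the extractor produces instead of forcing them into a fixed ontology - can be illustrated with a toy triple store. In a real pipeline the triples would be LLM-extracted from full news articles; the ones below are hand-written stand-ins.

```python
from collections import defaultdict

# Toy unconstrained knowledge graph: subject -predicate-> object triples
# are stored exactly as extracted, with no fixed ontology constraining
# which node or edge types are allowed.
class NewsKG:
    def __init__(self):
        self.edges = defaultdict(list)   # subject -> [(predicate, object)]

    def add(self, subj, pred, obj):
        self.edges[subj].append((pred, obj))

    def neighbors(self, subj):
        return self.edges.get(subj, [])

kg = NewsKG()
# Stand-in triples; a production system would extract these with an LLM.
kg.add("Team A", "won_match_against", "Team B")
kg.add("Team A", "signed_player", "Player X")
kg.add("Player X", "injured_during", "Match 12")

# A downstream use like a sports forecast can walk the graph freely -
# no ontology had to anticipate predicates like "injured_during".
print(kg.neighbors("Team A"))
```

The trade-off the episode calls a "sacrifice" shows up immediately: with no ontology, near-duplicate predicates ("won_match_against" vs "defeated") must be reconciled at query time rather than prevented at write time.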
Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. Before Microsoft, Guanhua earned his Computer Science PhD from UC Berkeley. Domino: Communication-Free LLM Training Engine // MLOps Podcast #278 with Guanhua "Alex" Wang, Senior Researcher at Microsoft. // Abstract Given the popularity of generative AI, Large Language Models (LLMs) often consume hundreds or thousands of GPUs to parallelize and accelerate the training process. Communication overhead becomes more pronounced when training LLMs at scale. To eliminate communication overhead in distributed LLM training, we propose Domino, which provides a generic scheme to hide communication behind computation. By breaking the data dependency of a single batch training into smaller independent pieces, Domino pipelines these independent pieces of training and provides a generic strategy of fine-grained communication and computation overlapping. Extensive results show that compared with Megatron-LM, Domino achieves up to 1.3x speedup for LLM training on Nvidia DGX-H100 GPUs. // Bio Guanhua Wang is a Senior Researcher on the DeepSpeed team at Microsoft. His research focuses on large-scale LLM training and serving. Previously, he led the ZeRO++ project at Microsoft, which helped cut model training time by more than half inside Microsoft and LinkedIn. He also led and was a major contributor to Microsoft Phi-3 model training. He holds a CS PhD from UC Berkeley, advised by Prof. Ion Stoica.
// MLOps Swag/Merch https://shop.mlops.community/ // Related Links Website: https://guanhuawang.github.io/ DeepSpeed hiring: https://www.microsoft.com/en-us/research/project/deepspeed/opportunities/ Large Model Training and Inference with DeepSpeed // Samyam Rajbhandari // LLMs in Prod Conference: https://youtu.be/cntxC3g22oU --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Guanhua on LinkedIn: https://www.linkedin.com/in/guanhua-wang/ Timestamps: [00:00] Guanhua's preferred coffee [00:17] Takeaways [01:36] Please like, share, leave a review, and subscribe to our MLOps channels! [01:47] Phi model explanation [06:29] Small Language Models optimization challenges [07:29] DeepSpeed overview and benefits [10:58] Crazy unimplemented AI ideas [17:15] Post-training vs QAT [19:44] Quantization over distillation [24:15] Using LoRAs [27:04] LLM scaling sweet spot [28:28] Quantization techniques [32:38] Domino overview [38:02] Training performance benchmark [42:44] Data dependency-breaking strategies [49:14] Wrap up
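Domino's core trick - breaking a single batch's data dependency into independent pieces so communication can hide behind the next piece's computation - can be mimicked in miniature with a background thread that "communicates" piece i while piece i+1 is being computed. This is a simulation of the scheduling idea only, not DeepSpeed's implementation: real Domino overlaps GPU kernels with collective communication, not Python functions with threads.

```python
import threading

# Simulate Domino-style overlap: split one batch into independent pieces,
# then run piece i's "communication" in the background while computing
# piece i+1. Stand-ins: compute() for a forward/backward pass, and
# communicate() for a collective like all-reduce.

def compute(piece):
    return [x * x for x in piece]

results = []
def communicate(out):
    results.append(sum(out))

batch = [[1, 2], [3, 4], [5, 6]]   # one batch, three independent pieces

pending = None
for piece in batch:
    out = compute(piece)           # compute piece i+1 ...
    if pending:
        pending.join()             # ... while piece i's comm finishes
    pending = threading.Thread(target=communicate, args=(out,))
    pending.start()
pending.join()

print(results)  # [5, 25, 61] - identical to fully sequential execution
```

The key property, which the assertions check, is that overlapping changes only the schedule, never the numerical result - the same guarantee Domino needs so that training with overlap matches training without it.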
Thanks to the High Signal Podcast by Delphina: https://go.mlops.community/HighSignalPodcast Aditya Naganath is an experienced investor currently working with Kleiner Perkins. He has a passion for connecting with people over coffee and discussing various topics related to tech, products, ideas, and markets. AI's Next Frontier // MLOps Podcast #277 with Aditya Naganath, Principal at Kleiner Perkins. // Abstract LLMs have ushered in an unmistakable supercycle in the world of technology. The low-hanging use cases have largely been picked off. The next frontier will be AI coworkers who sit alongside knowledge workers, doing work side by side. At the infrastructure level, one of the most important primitives invented by man - the data center - is being fundamentally rethought in this new wave. // Bio Aditya Naganath joined Kleiner Perkins’ investment team in 2022 with a focus on artificial intelligence, enterprise software applications, infrastructure, and security. Prior to joining Kleiner Perkins, Aditya was a product manager at Google focusing on growth initiatives for the next billion users team. He previously was a technical lead at Palantir Technologies and formerly held software engineering roles at Twitter and Nextdoor, where he was a Kleiner Perkins fellow. Aditya earned a patent during his time at Twitter for a technical analytics product he co-created. Originally from Mumbai, India, Aditya graduated magna cum laude from Columbia University with a bachelor’s degree in Computer Science and earned an MBA from Stanford University. Outside of work, you can find him playing guitar with a hard rock band, competing in chess or on the squash courts, and fostering puppies. He is also an avid poker player.
// MLOps Swag/Merch https://shop.mlops.community/ // Related Links Faith's Hymn by Beautiful Chorus: https://open.spotify.com/track/1bDv6grQB5ohVFI8UDGvKK?si=4b00752eaa96413b Substack: https://adityanaganath.substack.com/?utm_source=substack&utm_medium=web&utm_campaign=substack_profile With thanks to the High Signal Podcast by Delphina: https://go.mlops.community/HighSignalPodcast Building the Future of AI in Software Development // Varun Mohan // MLOps Podcast #195 - https://youtu.be/1DJKq8StuTo Do Re MI for Training Metrics: Start at the Beginning // Todd Underwood // AIQCON - https://youtu.be/DxyOlRdCofo --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Aditya on LinkedIn: https://www.linkedin.com/in/aditya-naganath/
Dr. Vincent Moens is an Applied Machine Learning Research Scientist at Meta and an author of TorchRL and TensorDict in PyTorch. PyTorch for Control Systems and Decision Making // MLOps Podcast #276 with Vincent Moens, Research Engineer at Meta. // Abstract PyTorch is widely adopted across the machine learning community for its flexibility and ease of use in applications such as computer vision and natural language processing. However, supporting the reinforcement learning, decision-making, and control communities is equally crucial, as these fields drive innovation in areas like robotics, autonomous systems, and game-playing. This podcast explores the intersection of PyTorch and these fields, covering practical tips and tricks for working with PyTorch, an in-depth look at TorchRL, and discussions on debugging techniques, optimization strategies, and testing frameworks. By examining these topics, listeners will understand how to effectively use PyTorch for control systems and decision-making applications. // Bio Vincent Moens is a research engineer on the PyTorch core team at Meta, based in London. As the maintainer of TorchRL (https://github.com/pytorch/rl) and TensorDict (https://github.com/pytorch/tensordict), Vincent plays a key role in supporting the decision-making community within the PyTorch ecosystem. Alongside his technical role in the PyTorch community, Vincent also actively contributes to AI-related research projects. Before joining Meta, Vincent worked as an ML researcher at Huawei and AIG. Vincent holds a Medical Degree and a PhD in Computational Neuroscience.
// MLOps Swag/Merch https://shop.mlops.community/ // Related Links Musical recommendation: https://open.spotify.com/artist/1Uff91EOsvd99rtAupatMP?si=jVkoFiq8Tmq0fqK_OIEglg Website: github.com/vmoens TorchRL: https://github.com/pytorch/rl TensorDict: https://github.com/pytorch/tensordict LinkedIn post: https://www.linkedin.com/posts/vincent-moens-9bb91972_join-the-tensordict-discord-server-activity-7189297643322253312-Wo9J?utm_source=share&utm_medium=member_desktop --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Vincent on LinkedIn: https://www.linkedin.com/in/mvi/
Matt Van Itallie is the founder and CEO of Sema. Prior to this, he was the Vice President of Customer Support and Customer Operations at Social Solutions. AI-Driven Code: Navigating Due Diligence & Transparency in MLOps // MLOps Podcast #275 with Matt van Itallie, Founder and CEO of Sema. // Abstract Matt Van Itallie, founder and CEO of Sema, discusses how comprehensive codebase evaluations play a crucial role in MLOps and technical due diligence. He highlights the impact of Generative AI on code transparency and explains the Generative AI Bill of Materials (GBOM), which helps identify and manage risks in AI-generated code. This talk offers practical insights for technical and non-technical audiences, showing how proper diligence can enhance value and mitigate risks in machine learning operations. // Bio Matt Van Itallie is the Founder and CEO of Sema. He and his team have developed Comprehensive Codebase Scans, the most thorough and easily understandable assessment of a codebase and engineering organization. These scans are crucial for private equity and venture capital firms looking to make informed investment decisions. Sema has evaluated code within organizations that have a collective value of over $1 trillion. In 2023, Sema served 7 of the 9 largest global investors, along with market-leading strategic investors, private equity, and venture capital firms, providing them with critical insights. In addition, Sema is at the forefront of Generative AI Code Transparency, which measures how much code created by GenAI is in a codebase. They are the inventors behind the Generative AI Bill of Materials (GBOM), an essential resource for investors to understand and mitigate risks associated with AI-generated code. Before founding Sema, Matt was a Private Equity operating executive and a management consultant at McKinsey.
He graduated from Harvard Law School and has had some interesting adventures, like hiking a third of the Appalachian Trail and biking from Boston to Seattle. // MLOps Swag/Merch https://shop.mlops.community/ // Related Links Full bio: https://alistar.fm/bio/matt-van-itallie --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Matt on LinkedIn: https://www.linkedin.com/in/mvi/
Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. // MLOps Podcast #274 with Michael Gschwind, Software Engineer, Software Executive at Meta Platforms. // Abstract Explore PyTorch's role in boosting model performance, on-device AI processing, and collaborations with tech giants like ARM and Apple. Michael shares his journey from gaming console accelerators to AI, emphasizing the power of community and innovation in driving advancements. // Bio Dr. Michael Gschwind is a Director / Principal Engineer for PyTorch at Meta Platforms. At Meta, he led the rollout of GPU Inference for production services. He led the development of MultiRay and Textray, the first deployment of LLMs at a scale exceeding a trillion queries per day shortly after its rollout. He created the strategy and led the implementation of PyTorch donation optimization with Better Transformers and Accelerated Transformers, bringing Flash Attention, PT2 compilation, and ExecuTorch into the mainstream for LLMs and GenAI models. Most recently, he led the enablement of large language models on-device AI with mobile and edge devices.
// MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://en.m.wikipedia.org/wiki/Michael_Gschwind --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Michael on LinkedIn: https://www.linkedin.com/in/michael-gschwind-3704222/?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app Timestamps: [00:00] Michael's preferred coffee [00:21] Takeaways [01:59] Please like, share, leave a review, and subscribe to our MLOps channels! [02:10] Gaming to AI Accelerators [11:34] Torch Chat goals [18:53] Pytorch benchmarking and competitiveness [21:28] Optimizing MLOps models [24:52] GPU optimization tips [29:36] Cloud vs On-device AI [38:22] Abstraction across devices [42:29] PyTorch developer experience [45:33] AI and MLOps-related antipatterns [48:33] When to optimize [53:26] Efficient edge AI models [56:57] Wrap up
// Abstract In this segment, the panel dives into the evolving landscape of AI, where large language models (LLMs) power the next wave of intelligent agents. In this engaging panel, leading investors Meera (Redpoint), George (Sequoia), and Sandeep (Prosus Ventures) discuss the promise and pitfalls of AI in production. From transformative industry applications to the challenges of scalability, costs, and shifting business models, this session unpacks the metrics and insights shaping GenAI's future. Whether you're excited about AI's potential or wary of its complexities, this is a must-watch for anyone exploring the cutting edge of tech investment. // Bio Host: Paul van der Boor, Senior Director of Data Science @ Prosus Group; Sandeep Bakshi, Head of Investments, Europe @ Prosus; Meera Clark, Principal @ Redpoint Ventures; George Robson, Partner @ Sequoia Capital. A Prosus | MLOps Community Production
Luke Marsden is a passionate technology leader, experienced in consulting, CEO, CTO, tech lead, product, sales, and engineering roles. Proven ability to conceive and execute a product vision from strategy to implementation, while iterating on product-market fit. We Can All Be AI Engineers and We Can Do It with Open Source Models // MLOps Podcast #273 with Luke Marsden, CEO of HelixML. // Abstract In this podcast episode, Luke Marsden explores practical approaches to building Generative AI applications using open-source models and modern tools. Through real-world examples, Luke breaks down the key components of GenAI development, from model selection to knowledge and API integrations, while highlighting the data privacy advantages of open-source solutions. // Bio Hacker & entrepreneur. Founder at helix.ml. Career spanning DevOps, MLOps, and now LLMOps. Working on bringing business value to local, open-source LLMs. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://helix.ml About open source AI: https://blog.helix.ml/p/the-open-source-ai-revolution Cream on Chrome by Ratatat: https://open.spotify.com/track/3s25iX3minD5jORW4KpANZ?si=719b715154f64a5f --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Luke on LinkedIn: https://www.linkedin.com/in/luke-marsden-71b3789/
//Abstract This panel explores the diverse landscape of AI agents, focusing on how they integrate voice interfaces, GUIs, and small language models to enhance user experiences. They'll also examine the roles of these agents in various industries, highlighting their impact on productivity, creativity, and user experience, and how these agents empower developers to build better solutions while addressing challenges like ensuring consistent performance and reliability across different modalities when deploying AI agents in production. //Bio Host: Diego Oppenheimer Co-founder @ Guardrails AI Jazmia Henry Founder and CEO @ Iso AI Rogerio Bonatti Researcher @ Microsoft Julia Kroll Applied Engineer @ Deepgram Joshua Alphonse Director of Developer Relations @ PremAI A Prosus | MLOps Community Production
Lauren Kaplan is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. The Impact of UX Research in the AI Space // MLOps Podcast #272 with Lauren Kaplan, Sr UX Researcher. // Abstract In this MLOps Community podcast episode, Demetrios and UX researcher Lauren Kaplan explore how UX research can transform AI and ML projects by aligning insights with business goals and enhancing user and developer experiences. Kaplan emphasizes the importance of stakeholder alignment, proactive communication, and interdisciplinary collaboration, especially in adapting company culture post-pandemic. They discuss UX’s growing relevance in AI, challenges like bias, and the use of AI in research, underscoring the strategic value of UX in driving innovation and user satisfaction in tech. // Bio Lauren is a sociologist and writer. She earned her PhD in Sociology at Goethe University Frankfurt and worked as a researcher at the University of Oxford and UC Berkeley. Passionate about homelessness and AI, Lauren joined UCSF and later Meta. Lauren recently led UX research at a global AI chip startup and is currently seeking new opportunities to further her work in UX research and AI. At Meta, Lauren led UX research for 1) Privacy-Preserving ML and 2) PyTorch. Lauren has worked on NLP projects such as Word2Vec analysis of historical HIV/AIDS documents presented at TextXD, UC Berkeley 2019. Lauren is passionate about understanding technology and advocating for the people who create and consume AI.
Lauren has published over 30 peer-reviewed research articles in domains including psychology, medicine, sociology, and more. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Podcast on AI UX https://open.substack.com/pub/aistudios/p/how-to-do-user-research-for-ai-products?r=7hrv8&utm_medium=ios 2024 State of AI Infra at Scale Research Report https://ai-infrastructure.org/wp-content/uploads/2024/03/The-State-of-AI-Infrastructure-at-Scale-2024.pdf Privacy-Preserving ML UX Public Article https://www.ttclabs.net/research/how-to-help-people-understand-privacy-enhancing-technologies Homelessness research and more: https://scholar.google.com/citations?user=24zqlwkAAAAJ&hl=en Agents in Production: https://home.mlops.community/public/events/aiagentsinprod Mk.gee Si (Bonus Track): https://open.spotify.com/track/1rukW2Wxnb3GGlY0uDWIWB?si=4d5b0987ad55444a --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Lauren on LinkedIn: https://www.linkedin.com/in/laurenmichellekaplan?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=ios_app
Dr. Petar Tsankov is a researcher and entrepreneur in the field of Computer Science and Artificial Intelligence (AI). EU AI Act - Navigating New Legislation // MLOps Podcast #271 with Petar Tsankov, Co-Founder and CEO of LatticeFlow AI. Big thanks to LatticeFlow for sponsoring this episode! // Abstract Dive into AI risk and compliance. Petar Tsankov, a leader in AI safety, talks about turning complex regulations into clear technical requirements and the importance of benchmarks in AI compliance, especially with the EU AI Act. We explore his work with big AI players and the EU on safer, compliant models, covering topics from multimodal AI to managing AI risks. He also shares insights on "Comply," an open-source tool for checking AI models against EU standards, making compliance simpler for AI developers. A must-listen for those tackling AI regulation and safety. // Bio Co-founder & CEO at LatticeFlow AI, building the world's first product enabling organizations to build performant, safe, and trustworthy AI systems. Before starting LatticeFlow AI, Petar was a senior researcher at ETH Zurich working on the security and reliability of modern systems, including deep learning models, smart contracts, and programmable networks. Petar has co-created multiple publicly available security and reliability systems that are regularly used: = ERAN, the world's first scalable verifier for deep neural networks: https://github.com/eth-sri/eran = VerX, the world's first fully automated verifier for smart contracts: https://verx.ch = Securify, the first scalable security scanner for Ethereum smart contracts: https://securify.ch = DeGuard, de-obfuscates Android binaries: http://apk-deguard.com = SyNET, the first scalable network-wide configuration synthesis tool: https://synet.ethz.ch Petar also co-founded ChainSecurity, an ETH spin-off that within 2 years became a leader in formal smart contract audits and was acquired by PwC Switzerland in 2020.
// MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://latticeflow.ai/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Petar on LinkedIn: https://www.linkedin.com/in/petartsankov/
Bernie Wu is VP of Business Development for MemVerge. He has 25+ years of experience as a senior executive for data center hardware and software infrastructure companies including companies such as Conner/Seagate, Cheyenne Software, Trend Micro, FalconStor, Levyx, and MetalSoft. Boosting LLM/RAG Workflows & Scheduling w/ Composable Memory and Checkpointing // MLOps Podcast #270 with Bernie Wu, VP Strategic Partnerships/Business Development of MemVerge. // Abstract Limited memory capacity hinders the performance and potential of research and production environments utilizing Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) techniques. This discussion explores how industry-standard CXL memory can be configured as a secondary, composable memory tier to alleviate this constraint. We will highlight some recent work we’ve done integrating this novel class of memory into LLM/RAG/vector database frameworks and workflows. Disaggregated shared memory is envisioned to offer high performance, low latency caches for model/pipeline checkpoints of LLM models, KV caches during distributed inferencing, LORA adaptors, and in-process data for heterogeneous CPU/GPU workflows. We expect to showcase these types of use cases in the coming months. // Bio Bernie is VP of Strategic Partnerships/Business Development for MemVerge. His focus has been building partnerships in the AI/ML, Kubernetes, and CXL memory ecosystems. He has 25+ years of experience as a senior executive for data center hardware and software infrastructure companies including companies such as Conner/Seagate, Cheyenne Software, Trend Micro, FalconStor, Levyx, and MetalSoft. He is also on the Board of Directors for Cirrus Data Solutions. Bernie has a BS/MS in Engineering from UC Berkeley and an MBA from UCLA.
// MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: www.memverge.com Accelerating Data Retrieval in Retrieval Augmentation Generation (RAG) Pipelines using CXL: https://memverge.com/accelerating-data-retrieval-in-rag-pipelines-using-cxl/ Do Re MI for Training Metrics: Start at the Beginning // Todd Underwood // AIQCON: https://youtu.be/DxyOlRdCofo Handling Multi-Terabyte LLM Checkpoints // Simon Karasik // MLOps Podcast #228: https://youtu.be/6MY-IgqiTpg Compute Express Link (CXL) FPGA IP: https://www.intel.com/content/www/us/en/products/details/fpga/intellectual-property/interface-protocols/cxl-ip.html Ultra Ethernet Consortium: https://ultraethernet.org/ Unified Acceleration (UXL) Foundation: https://www.intel.com/content/www/us/en/developer/articles/news/unified-acceleration-uxl-foundation.html RoCE networks for distributed AI training at scale: https://engineering.fb.com/2024/08/05/data-center-engineering/roce-network-distributed-ai-training-at-scale/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Bernie on LinkedIn: https://www.linkedin.com/in/berniewu/ Timestamps: [00:00] Bernie's preferred coffee [00:11] Takeaways [01:37] First principles thinking focus [05:02] Memory Abundance Concept Discussion [06:45] Managing load spikes [09:38] GPU checkpointing challenges [16:29] Distributed memory problem solving [18:27] Composable and Virtual Memory [21:49] Interactive chat annotation [23:46] Memory elasticity in AI [27:33] GPU networking tests [29:12] GPU Scheduling workflow optimization [32:18] Kubernetes Extensions and Tools [37:14] GPU bottleneck analysis [42:04] Economical memory strategies [45:14] Elastic
memory management strategies [47:57] Problem solving approach [50:15] AI infrastructure elasticity evolution [52:33] RDMA and RoCE explained [54:14] Wrap up
Gideon Mendels is the Chief Executive Officer at Comet, the leading solution for managing machine learning workflows. How to Systematically Test and Evaluate Your LLMs Apps // MLOps Podcast #269 with Gideon Mendels, CEO of Comet. // Abstract When building LLM applications, developers need to take a hybrid approach, drawing on both ML and software engineering best practices. They need to define eval metrics and track all of their experiments to see what is and is not working. They also need to define comprehensive unit tests for their particular use case so they can confidently check if their LLM app is ready to be deployed. // Bio Gideon Mendels is the CEO and co-founder of Comet, the leading solution for managing machine learning workflows from experimentation to production. He is a computer scientist, ML researcher and entrepreneur at his core. Before Comet, Gideon co-founded GroupWize, where they trained and deployed NLP models processing billions of chats. His journey with NLP and Speech Recognition models began at Columbia University and Google where he worked on hate speech and deception detection. // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.comet.com/site/ All the Hard Stuff with LLMs in Product Development // Phillip Carter // MLOps Podcast #170: https://youtu.be/DZgXln3v85s Opik by Comet: https://www.comet.com/site/products/opik/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Gideon on LinkedIn: https://www.linkedin.com/in/gideon-mendels/ Timestamps: [00:00] Gideon's preferred coffee [00:17] Takeaways [01:50] A huge shout-out to Comet ML for sponsoring this episode!
[02:09] Please like, share, leave a review, and subscribe to our MLOps channels! [03:30] Evaluation metrics in AI [06:55] LLM Evaluation in Practice [10:57] LLM testing methodologies [16:56] LLM as a judge [18:53] OPIC track function overview [20:33] Tracking user response value [26:32] Exploring AI metrics integration [29:05] Experiment tracking and LLMs [34:27] Micro Macro collaboration in AI [38:20] RAG Pipeline Reproducibility Snapshot [40:15] Collaborative experiment tracking [45:29] Feature flags in CI/CD [48:55] Labeling challenges and solutions [54:31] LLM output quality alerts [56:32] Anomaly detection in model outputs [1:01:07] Wrap up
Raj Rikhy is a Senior Product Manager at Microsoft AI + R, enabling deep reinforcement learning use cases for autonomous systems. // MLOps Podcast #268 with Raj Rikhy, Principal Product Manager at Microsoft. // Abstract In this MLOps Community podcast, Demetrios chats with Raj Rikhy, Principal Product Manager at Microsoft, about deploying AI agents in production. They discuss starting with simple tools, setting clear success criteria, and deploying agents in controlled environments for better scaling. Raj highlights real-time uses like fraud detection and optimizing inference costs with LLMs, while stressing human oversight during early deployment to manage LLM randomness. The episode offers practical advice on deploying AI agents thoughtfully and efficiently, avoiding over-engineering, and integrating AI into everyday applications. // Bio Raj is a Senior Product Manager at Microsoft AI + R, enabling deep reinforcement learning use cases for autonomous systems. Previously, Raj was the Group Technical Product Manager in the CDO for Data Science and Deep Learning at IBM. Prior to joining IBM, Raj worked in product management for several years at Bitnami, AppDirect, and Salesforce.
// MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.microsoft.com/en-us/research/focus-area/ai-and-microsoft-research/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Raj on LinkedIn: https://www.linkedin.com/in/rajrikhy/
//Abstract If there is one thing that is true, it is that data is constantly changing. How can we keep up with these changes? How can we make sure that every stakeholder has visibility? How can we create a culture of understanding around data change management? //Bio - Benjamin Rogojan: Data Science And Engineering Consultant @ Seattle Data Guy - Chad Sanderson: CEO & Co-Founder @ Gable - Christophe Blefari: CTO & Co-founder @ NAO - Maggie Hays: Founding Community Product Manager, DataHub @ Acryl Data A big thank you to our Premium Sponsors @Databricks, @tecton8241, & @onehouseHQ for their generous support!