MLOps.community

Author: Demetrios Brinkmann


Description

Weekly talks and fireside chats about everything that has to do with the new space emerging around DevOps for Machine Learning aka MLOps aka Machine Learning Operations.
354 Episodes
// Abstract Enterprise AI leaders continue to explore the best productivity solutions that solve business problems, mitigate risks, and increase efficiency. Building reliable and secure AI/ML systems requires following industry standards, an operating framework, and best practices that can accelerate and streamline a scalable architecture that produces the expected business outcomes. This session, featuring veteran practitioners, focuses on building scalable, reliable, high-quality AI and ML systems for enterprises. // Panelists - Hira Dangol: VP, AI/ML and Automation @ Bank of America - Rama Akkiraju: VP, Enterprise AI/ML @ NVIDIA - Nitin Aggarwal: Head of AI Services @ Google - Steven Eliuk: VP, AI and Governance @ IBM A big thank you to our Premium Sponsors Google Cloud & Databricks for their generous support! Timestamps: 00:00 Panelists discuss vision and strategy in AI 05:18 Steven Eliuk, IBM expertise in data services 07:30 AI as means to improve business metrics 11:10 Key metrics in production systems: efficiency and revenue 13:50 Consistency in data standards aids data integration 17:47 Generative AI presents new data classification risks 22:47 Evaluating implications, monitoring, and validating use cases 26:41 Evaluating natural language answers for efficient production 29:10 Monitoring AI models for performance and ethics 31:14 AI metrics and user responsibility for future models 34:56 Access to data is improving, promising progress
Nik Suresh wrote an evisceration of the current AI hype boom called "I Will F**king Piledrive You If You Mention AI Again." AI Operations Without Fundamental Engineering Discipline // MLOps Podcast #250 with Nikhil Suresh, Director @ Hermit Tech. // Abstract Nik is on the podcast because of an anti-AI hype piece, so a reasonable thing to discuss is what most companies get wrong when non-technical management wants to immediately roll out ML initiatives but is unwilling to bring on board the technical naysayers who would set them up for success. // Bio Nik is the author of ludic.mataroa.blog, who wrote "I Will [REDACTED] Piledrive You If You Mention AI Again", and mostly works in the data engineering and data science spaces. Nik's writing and company both focus on bringing more care to work, pushing back against the industry's worst excesses both technically and spiritually, and getting fundamentals right. Nik also has a reasonably strong background in psychology. His data science training was of the pre-LLM variety, circa 2018, when there was a lot of hype but it wasn't this ridiculous.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://ludic.mataroa.blog/ Nik's article: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/ Harnessing MLOps in Finance // Michelle Marie Conway // MLOps Podcast Coffee #174: https://youtu.be/nIEld_Q6L-0 Fundamentals of Data Engineering: Plan and Build Robust Data Systems (audiobook) by Joe Reis and Matt Housley: https://audiobookstore.com/audiobooks/fundamentals-of-data-engineering.aspx Bullshit Jobs: A Theory by David Graeber: https://www.amazon.co.jp/-/en/David-Graeber/dp/0241263883 Does a Frog have Scorpion Nature podcast: https://open.spotify.com/show/57i8sYVqxG4i3NvBniLfhv --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Nik on LinkedIn: https://www.linkedin.com/in/nik-suresh/ Timestamps: [00:00] Nik's preferred coffee [00:30] Takeaways [01:40] Please like, share, leave a review, and subscribe to our MLOps channels! [01:56] AI hype and humor [07:21] Defining project success [08:57] Effective data utilization [12:18] AI Hype vs Data Engineering [14:44] AI implementation challenges [17:44 - 18:35] Data Engineering for AI and ML Virtual Conference Ad [18:35] Managing AI Expectations [22:08] AI expectations vs reality [26:00] Balancing Engineering and AI [31:54] Highlighting engineer success [35:25] The real challenges [36:30] Embracing work challenges [37:21] Dealing with podcast disappointments [40:50] Creating content for visibility [43:02] Exploring niche interests [44:14] Relationship building [47:15] Strategic approach to success [48:36] Wrap up
Eric Landry is a seasoned AI and Machine Learning leader with extensive expertise in software engineering and practical applications in NLP, document classification, and conversational AI. With technical proficiency in Java, Python, and key ML tools, he leads the Expedia Machine Learning Engineering Guild and has spoken at major conferences like Applied Intelligence 2023 and KDD 2020. AI in Healthcare // MLOps Podcast #249 with Eric Landry, CTO/CAIO @ Zeteo Health. // Abstract Eric Landry discusses the integration of AI in healthcare, highlighting use cases like patient engagement through chatbots and managing medical data. He addresses benchmarking and limiting hallucinations in LLMs, emphasizing privacy concerns and data localization. Landry maintains a hands-on approach to developing AI solutions and navigating the complexities of healthcare innovation. Despite necessary constraints, he underscores the potential for AI to proactively engage patients and improve health outcomes. // Bio Eric Landry is a technology veteran with 25+ years of experience in the healthcare, travel, and computer industries, specializing in machine learning engineering and AI-based solutions. He holds a Master's in Software Engineering (NLP thesis topic) from the University of Texas at Austin (2005). He has showcased his expertise and leadership in the field with three US patents, published articles on machine learning engineering, and speaking engagements at Applied Intelligence Live 2023, KDD 2020, and Data Science Salon 2024, and he formerly led Expedia’s MLE guild. Previously, Eric was the director of AI Engineering and Conversation Platform at Babylon Health and Expedia. He is currently CTO/CAIO at Zeteo Health.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.zeteo.health/ Building Threat Detection Systems: An MLE's Perspective // Jeremy Jordan // MLOps Podcast #134: https://youtu.be/13nOmMJuiAo --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Eric on LinkedIn: https://www.linkedin.com/in/jeric-landry/ Timestamps: [00:00] Eric's preferred coffee [00:16] Takeaways [01:16] Please like, share, leave a review, and subscribe to our MLOps channels! [01:32] ML and AI in 2005 [04:43] Last job at Babylon Health [10:57] Data access solutions [14:35] Prioritize AI ML Team Success [16:39] Eric's current work [20:36] Engage in holistic help [22:13] High-stakes chatbots [27:30] Navigating Communication Across Diverse Communities [31:49] When Bots Go Wrong [34:15] Health care challenges ahead [36:05] Behavioral health tech challenges [39:45] Stress from Apps Notifications [41:11] Combining different guardrails tools [47:16] Navigating Privacy AI [50:12] Wrap up
Aniket Kumar Singh is a Vision Systems Engineer at Ultium Cells, skilled in Machine Learning and Deep Learning, and is also engaged in AI research focusing on Large Language Models (LLMs). Evaluating the Effectiveness of Large Language Models: Challenges and Insights // MLOps Podcast #248 with Aniket Kumar Singh, CTO @ MyEvaluationPal | ML Engineer @ Ultium Cells. // Abstract Dive into the world of Large Language Models (LLMs) like GPT-4. Why is it crucial to evaluate these models, how do we measure their performance, and what common hurdles do we face? Drawing from Aniket's research, he shares insights on the importance of prompt engineering and model selection. Aniket also discusses real-world applications in healthcare, economics, and education, and highlights future directions for improving LLMs. // Bio Aniket is a Vision Systems Engineer at Ultium Cells, skilled in Machine Learning and Deep Learning. He is also engaged in AI research, focusing on Large Language Models (LLMs). // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: www.aniketsingh.me Aniket's AI Research for Good blog, where he plans to share new research focused on doing good: www.airesearchforgood.org Aniket's papers: https://scholar.google.com/citations?user=XHxdWUMAAAAJ&hl=en --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Aniket on LinkedIn: https://www.linkedin.com/in/singh-k-aniket/ Timestamps: [00:00] Aniket's preferred coffee [00:14] Takeaways [01:29] Aniket's job and hobby [03:06] Evaluating LLMs: Systems-Level Perspective [05:55] Rule-based system [08:32] Evaluation Focus: Model Capabilities [13:04] LLM Confidence [13:56] Problems with LLM Ratings [17:17] Understanding AI Confidence Trends [18:28] Aniket's papers [20:40] Testing AI Awareness [24:36] Agent Architectures Overview [27:05] Leveraging LLMs for tasks [29:53] Closed systems in Decision-Making [31:28] Navigating model Agnosticism [33:47] Robust Pipeline vs Robust Prompt [34:40] Wrap up
Sophia Rowland is a Senior Product Manager focusing on ModelOps and MLOps at SAS. In her previous role as a data scientist, Sophia worked with dozens of organizations to solve a variety of problems using analytics. David Weik has a passion for data and for creating integrated, customer-centric solutions, thinking data and people first to create value-added solutions. Extending AI: From Industry to Innovation // MLOps Podcast #246 with Sophia Rowland, Senior Product Manager, and David Weik, Senior Solutions Architect, of SAS. Huge thank you to SAS for sponsoring this episode. SAS - http://www.sas.com/ // Abstract Organizations worldwide invest hundreds of billions into AI, but they do not see a return on their investments until they are able to leverage their analytical assets and models to make better decisions. At SAS, we focus on optimizing every step of the Data and AI lifecycle to get high-performing models into a form and location where they drive analytically driven decisions. Join experts from SAS as they share learnings and best practices from implementing MLOps and LLMOps at organizations across industries, around the globe, and using various types of models and deployments, from IoT CV problems to composite flows that feature LLMs. // Bio Sophia Rowland Sophia Rowland is a Senior Product Manager focusing on ModelOps and MLOps at SAS. In her previous role as a data scientist, Sophia worked with dozens of organizations to solve a variety of problems using analytics. As an active speaker and writer, Sophia has spoken at events like All Things Open, SAS Explore, and SAS Innovate as well as written dozens of blogs and articles. As a staunch North Carolinian, Sophia holds degrees from both UNC-Chapel Hill and Duke, including bachelor’s degrees in computer science and psychology and a Master of Science in Quantitative Management: Business Analytics from the Fuqua School of Business.
Outside of work, Sophia enjoys reading an eclectic assortment of books, hiking throughout North Carolina, and trying to stay upright while ice skating. David Weik David joined SAS in 2020 as a solutions architect. He helps customers to define and implement data-driven solutions. Previously, David was a SAS administrator/developer at a German insurance company working with the integration capabilities of SAS, Robotic Process Automation, and more. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links http://www.sas.com/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Sophia on LinkedIn: https://www.linkedin.com/in/sophia-rowland/ Connect with David on LinkedIn: https://www.linkedin.com/in/david-weik/ Timestamps: [00:00] Sophia & David's preferred coffee [00:19] Takeaways [02:11] Please like, share, leave a review, and subscribe to our MLOps channels! [02:55] Hands on MLOps and AI [05:14] Next-Gen MLOps Challenges [07:24] Data scientists adopting software [11:48] Taking a different approach [13:43] Zombie Model Management [16:36] Optimizing ML Revenue Allocation [18:39] Other use cases - Lockout - Tagout procedure [21:43] Vision Model Integration Challenges [26:16] Costly errors in predictive maintenance [27:25] Integration of Gen AI [34:32] Governance challenges in AI [38:00] Governance in Gen AI vs Governance with Traditional ML [41:53] Evaluation challenges in industries [46:49] Interface frustration with Chatbots [51:25] Implementing AI Agent's success [54:18] Usability challenges in interfaces [57:03] Themes in High-Performing AI Teams [1:00:51] Wrap up
Matar Haller is the VP of Data & AI at ActiveFence, where her teams own the end-to-end automated detection of harmful content at scale, regardless of the abuse area or media type. The work they do here is engaging, impactful, and tough, and Matar is grateful for the people she gets to do it with. AI For Good - Detecting Harmful Content at Scale // MLOps Podcast #245 with Matar Haller, VP of Data & AI at ActiveFence. // Abstract One of the biggest challenges facing online platforms today is detecting harmful content and malicious behavior. Platform abuse poses brand and legal risks, harms the user experience, and often represents a blurred line between online and offline harm. So how can online platforms tackle abuse in a world where bad actors are continuously changing their tactics and developing new ways to avoid detection? // Bio Matar Haller leads the Data & AI Group at ActiveFence, where her teams are responsible for the data, algorithms, and infrastructure that fuel ActiveFence’s ability to ingest, detect, and analyze harmful activity and malicious content at scale in an ever-changing, complex online landscape. Matar holds a Ph.D. in Neuroscience from the University of California at Berkeley, where she recorded and analyzed signals from electrodes surgically implanted in human brains. Matar is passionate about expanding leadership opportunities for women in STEM fields and has three children who surprise and inspire her every day. 
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links activefence.com https://www.youtube.com/@ActiveFence --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Matar on LinkedIn: https://www.linkedin.com/company/11682234/admin/feed/posts/ Timestamps: [00:00] Matar's preferred coffee [00:13] Takeaways [01:39] The talk that stood out [06:15] Online hate speech challenges [08:13] Evaluate harmful media API [09:58] Content moderation: AI models [11:36] Optimizing speed and accuracy [13:36] Cultural reference AI training [15:55] Functional Tests [20:05] Continuous adaptation of AI [26:43] AI detection concerns [29:12] Fine-Tuned vs Off-the-Shelf [32:04] Monitoring Transformer Model Hallucinations [34:08] Auditing process ensures accuracy [38:38] Testing strategies for ML [40:05] Modeling hate speech deployment [42:19] Improving production code quality [43:52] Finding balance in Moderation [47:23] Model's expertise: Cultural Sensitivity [50:26] Wrap up
Catherine Nelson is a freelance data scientist and writer, and the author of the O’Reilly book "Software Engineering for Data Scientists". Why All Data Scientists Should Learn Software Engineering Principles // MLOps podcast #245 with Catherine Nelson, a freelance Data Scientist. A big thank you to LatticeFlow AI for sponsoring this episode! LatticeFlow AI - https://latticeflow.ai/ // Abstract Data scientists have a reputation for writing bad code. This quote from Reddit sums up how many people feel: “It's honestly unbelievable and frustrating how many Data Scientists suck at writing good code.” But as data science projects grow, and because the job now often includes deploying ML models, it's increasingly important for DSs to learn fundamental SWE principles such as keeping your code modular, making sure your code is readable by other people, and so on. The exploratory nature of DS projects means that you can't be sure where you will end up at the start of a project, but there's still a lot you can do to standardize the code you write. // Bio Catherine Nelson is the author of "Software Engineering for Data Scientists", a guide for data scientists who want to level up their coding skills, published by O'Reilly in May 2024. She is currently consulting for GenAI startups and providing mentorship and career coaching to data scientists. Previously, she was a Principal Data Scientist at SAP Concur. She has extensive experience deploying NLP models to production and evaluating ML systems, and she is also co-author of the book "Building Machine Learning Pipelines", published by O'Reilly in 2020. In her previous career as a geophysicist, she studied ancient volcanoes and explored for oil in Greenland. Catherine has a PhD in geophysics from Durham University and a Masters of Earth Sciences from Oxford University.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Software Engineering for Data Scientists book by Catherine Nelson: https://learning.oreilly.com/library/view/software-engineering-for/9781098136192/ https://www.amazon.com/Software-Engineering-Data-Scientists-Notebooks/dp/1098136209 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Catherine on LinkedIn: https://www.linkedin.com/in/catherinenelson1/ Timestamps: [00:00] Catherine's preferred coffee [00:15] Takeaways [02:38] Meeting magic: Embracing serenity [06:23] The Software Engineering for Data Scientists book [10:41] Exploring ideas rapidly [12:52] Bridging Data Science gaps [16:17] Data poisoning concerns [18:26] Transitioning from a data scientist to a machine learning engineer [21:53] Rapid Prototyping vs Thorough Development [23:45] Data scientists take ownership [25:53] Data scientists' role balance [30:30] Understanding system design process [36:00] Data scientists and Kubernetes [41:33 - 43:03] LatticeFlow AI Ad [43:05] The Future of Data Science [45:09] Data scientists analyzing models [46:46] Tools gaps in prompt tracking [50:44] Learnings from writing the book
Meta GenAI Infra Blog Review // Special MLOps Podcast episode by Demetrios. // Abstract Demetrios explores Meta's innovative infrastructure for large-scale AI operations, highlighting three blog posts on training large language models, maintaining AI capacity, and building Meta's GenAI infrastructure. The discussion reveals Meta's handling of hundreds of trillions of AI model executions daily, focusing on scalability, cost efficiency, and robust networking. Key elements include the Ops planner work orchestrator, safety protocols, and checkpointing challenges in AI training. Meta's efforts in hardware design, software solutions, and networking optimize GPU performance, with innovations like a custom Linux file system and advanced networking file systems like Hammerspace. The podcast also discusses advancements in PyTorch, network technologies like RoCE and NVIDIA's Quantum-2 InfiniBand fabric, and Meta's commitment to open-source AGI. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Building Meta’s GenAI Infrastructure blog: https://engineering.fb.com/2024/03/12/data-center-engineering/building-metas-genai-infrastructure/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Timestamps: [00:00] Meta handles trillions of AI model executions [07:01] Meta creating AGI, ethical and sustainable [08:13] Concerns about energy use in training models [12:22] Network, hardware, and job optimization for reliability [17:21] Highlights of Arista and Nvidia hardware architecture [20:11] Meta's clusters optimized for efficient fabric [24:40] Varied steps, careful checkpointing in AI training [28:46] Meta is maintaining huge GPU clusters for AI [29:47] AI training is faster and more demanding [35:27] Ops planner orchestrates a million operations and reduces maintenance [37:15] Ops planner ensures safety and well-tested changes
Shaun Wei, the CEO and co-founder of RealChar, shares his journey from working in the autonomous vehicle industry to creating an open-source voice assistant project called RealChar, which eventually evolved into Rivia, a voice AI assistant focused on managing personal phone calls. The Future of AI and Consumer Empowerment // MLOps podcast #244 with Shaun Wei, CEO & Co-Founder of RealChar. A big thank you to LatticeFlow for sponsoring this episode! LatticeFlow - https://latticeflow.ai/ // Abstract Explore the groundbreaking work RealChar is doing with its consumer application, Rivia. This discussion focuses on how Rivia leverages Generative AI and Traditional Machine Learning to handle mundane phone calls and customer service interactions, aiming to free up human time for more meaningful tasks. The product, currently in beta, embodies a forward-thinking approach to AI, where the technology offloads day-to-day burdens like scheduling appointments and making calls. // Bio Shaun Wei is a well-connected technology professional with a rich background in developing and analyzing artificial intelligence systems. In 2018, Shaun played a pivotal role in the advent and deployment of Google Duplex, a remarkable AI capable of handling natural conversations and performing tasks such as booking hair salon appointments and restaurant reservations via telephone. His involvement wasn't just limited to the developmental side; Shaun also uniquely positioned himself on the receiving end, gathering insights by interviewing users directly impacted by the technology. This dual perspective has enabled Shaun to grasp both the technical underpinnings and the human-centric applications of AI, making him a valuable asset in the tech industry.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links https://www.rivia.tech/ https://realchar.ai/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Shaun on LinkedIn: https://www.linkedin.com/in/shaunwei/ Timestamps: [00:00] Shaun's preferred coffee [00:28] Takeaways [03:30] Please like, share, leave a review, and subscribe to our MLOps channels! [03:57] AI in Production: Challenges & Insights [06:13] AI Scheduling and Assistance [08:00] Technical Challenges in AI [12:36 - 14:06] LatticeFlow Ad [14:09] Handling Challenges in AI [15:52] Learning driving and technical aspects [19:04] Self-Driving Cars: Multimodal Integration [23:41] Processing data with Transformers [26:46] Real-time phone data gathering [30:49] Real-time observability in AI [35:09] Time to first token [37:26] Preferred vs. Dynamic Model Selection [40:12] Event-driven architecture basics [42:06] Navigating challenges together [44:02] Challenges with Inconsistent Responses [45:40] Importance of product reliability [47:47] Training Data and Model Performance [50:02] Exploring AI in Customer Service [51:34] Navigating challenges in AI [53:15] Excited Launch Strategy Advice [57:10] Wrap up
Join us at our first in-person conference today all about AI Quality: https://www.aiqualityconference.com/ ML and AI as Distinct Control Systems in Heavy Industrial Settings // MLOps podcast #243 with Richard Howes, CTO of Metaformed. Richard Howes is a dedicated engineer who is passionate about control systems whether it be embedded systems, industrial automation, or AI/ML in a business application. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract How can we balance the need for safety, reliability, and robustness with the extreme pace of technology advancement in heavy industry? The key to unlocking the full potential of data will be to have a mixture of experts both from an AI and human perspective to validate anything from a simple KPI to a Generative AI Assistant guiding operators throughout their day. The data generated by heavy industries like agriculture, oil & gas, forestry, real estate, civil infrastructure, and manufacturing is underutilized and struggles to keep up with the latest and greatest - and for good reason. They provide the shelter we live and work in, the food we eat, and the energy to propel society forward. Compared to the pace of AI innovation they move slowly, have extreme consequences for failure, and typically involve a significant workforce. During this discussion, we will outline the data ready to be utilized by ML, AI, and data products in general as well as some considerations for creating new data products for these heavy industries. To account for complexity and uniqueness throughout the organization it is critical to engage operational staff, ensure safety is considered from all angles, and build adaptable ETL needed to bring the data to a usable state. // Bio Richard Howes is a dedicated engineer who is passionate about control systems whether it be embedded systems, industrial automation, or AI/ML in a business application. 
All of these systems require a robust control philosophy that outlines the system, its environment, and how the controller should function within it. Richard has a bachelor's degree in Electrical Engineering from the University of Victoria, where he specialized in industrial automation and embedded systems. Richard is primarily focused on the heavy industrial sectors like energy generation, oil & gas, pulp/paper, forestry, real estate, and manufacturing. He works on both physical process control and business process optimization using the control philosophy principles as a guiding star. Richard has been working with industrial systems for over 10 years, designing, commissioning, operating, and maintaining automated systems. For the last 5 years, Richard has been investing time into data and data science-related disciplines, bringing the physical process as close as possible to the business and taking advantage of disparate data sets throughout the organization. Now, with the age of AI upon us, he is focusing on integrating this technology safely, reliably, and with distinct organizational goals and ROI. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links AWS Trainium: https://aws.amazon.com/machine-learning/trainium/ AWS Inferentia: https://aws.amazon.com/machine-learning/inferentia/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Richard on LinkedIn: https://www.linkedin.com/in/richardhowes/
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Accelerating Multimodal AI // MLOps podcast #241 with Ethan Rosenthal, Member of Technical Staff of Runway. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract We’re still trying to figure out systems and processes for training and serving “regular” machine learning models, and now we have multimodal AI to contend with! These new systems present unique challenges across the spectrum, from data management to efficient inference. I’ll talk about the similarities, differences, and challenges that I’ve seen by moving from tabular machine learning, to large language models, to generative video systems. I’ll also talk about the setups and tools that I have seen work best for supporting and accelerating both the research and productionization process. // Bio Ethan works at Runway building systems for media generation. Ethan's work generally straddles the boundary between research and engineering without falling too hard on either side. Prior to Runway, Ethan spent 4 years at Square. There, he led a small team of AI Engineers training large language models for Conversational AI. Before Square, Ethan freelance consulted and worked at a couple of e-commerce startups. Ethan found his way into tech by way of a Physics PhD.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.ethanrosenthal.com Ethan's magnum opus: https://www.ethanrosenthal.com/2020/08/25/optimal-peanut-butter-and-banana-sandwiches/ Real-time Model Inference in a Video Streaming Environment // Brannon Dorsey // Coffee Sessions #98: https://youtu.be/TNO6rYwP3yg Feature Stores for Self-Service Machine Learning: https://www.ethanrosenthal.com/2021/02/03/feature-stores-self-service/ Gen-1: The Next Step Forward for Generative AI: https://research.runwayml.com/gen1 Machine Learning: The High Interest Credit Card of Technical Debt by D. Sculley et al.: https://research.google/pubs/machine-learning-the-high-interest-credit-card-of-technical-debt/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Ethan on Bluesky: https://bsky.app/profile/ethanrosenthal.com Timestamps: [00:00] Ethan's preferred coffee [00:11] Takeaways [02:07] Falling into LLMs [03:16] Advanced AI Tech Capabilities [04:40] AI-powered video editing tool [06:56] Transition to AI: Diffusion Models [09:09] Multimodal Feature Store breakdown [15:33] Multimodal Feature Stores Evolution [18:09] Benefits of Multimodal Feature Store [25:09] Centralized Training Data Repository [27:33] Large-scale distributed training [32:37 - 33:39] AWS Ad [33:45] Dealing with researchers on productionizing [43:52] Infrastructure for Researchers and Engineers [47:04] Generative DevOps movement [49:21] Structuring teams [52:06] Multimodal Feature Stores Efficiency [54:02] Wrap up
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Navigating the AI Frontier: The Power of Synthetic Data and Agent Evaluations in LLM Development // MLOps podcast #241 with Boris Selitser, Co-Founder and CTO/CPO of Okareo. A big thank you to LatticeFlow for sponsoring this episode! LatticeFlow - https://latticeflow.ai/ // Abstract Explore the evolving landscape of building LLM applications, focusing on the critical roles of synthetic data and agent evaluations. Discover how synthetic data enhances model behavior description, prototyping, testing, and fine-tuning, driving robustness in LLM applications. Learn about the latest methods for evaluating complex agent-based systems, including RAG-based evaluations, dialog-level assessments, simulated user interactions, and adversarial models. This talk delves into the specific challenges developers face and the tradeoffs involved in each evaluation approach, providing practical insights for effective AI development. // Bio Boris is the Co-Founder and CTO/CPO at Okareo. Okareo is a full-cycle platform for developers to evaluate and customize AI/LLM applications. Before Okareo, Boris was Director of Product at Meta/Facebook, leading teams building internal platforms and ML products. Examples include a copyright classification system across the Facebook apps and an engagement platform for over 200K developers, 500K+ creators, and 12M+ Oculus users. Boris has a bachelor’s in Computer Science from UC Berkeley. 
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links https://docs.okareo.com/blog/data_loop https://docs.okareo.com/blog/agent_eval The Real E2E RAG Stack // Sam Bean // MLOps Podcast #217 - https://youtu.be/8uZst7pgOw0 RecSys at Spotify // Sanket Gupta // MLOps Podcast #232 - https://youtu.be/byH-ARJA4gk --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Boris on LinkedIn: https://www.linkedin.com/in/selitser/ Timestamps: [00:00] Boris' preferred coffee [00:37] Takeaways [02:32] Please like, share, leave a review, and subscribe to our MLOps channels! [02:48] Software Engineering and Data Science [06:01] AI Transformative Potential Explained [10:31] Prompt Injection Protection Strategies [17:03] Agent's metrics for Jira [24:11] Data and Metrics Evolution [27:54] Evaluation Focus Enhances Systems [31:22 - 32:52] LatticeFlow Ad [32:55] Custom Evaluation and Synthetic Data [36:23] Synthetic data for expansion, evaluation, and map [41:06] Diverse agents' personalities for readiness [44:25] Agent functions [46:17] Optimizing Routing Agents [50:04] Adapting to tool output for decision-making [52:56] Agent framework evolution [55:41] Agent framework for delivering value [57:03] Wrap up
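The synthetic-data and agent-evaluation ideas from the abstract above can be sketched in a few lines. This is a hedged illustration, not Okareo's API: the query templates, the `toy_router` stand-in "agent", and its routing keywords are all invented for the example.

```python
import random

# Templates for generating synthetic user queries (invented for this sketch).
TEMPLATES = [
    "How do I {action} my {object}?",
    "I can't {action} the {object}, help!",
]
ACTIONS = ["reset", "cancel", "update"]
OBJECTS = ["password", "subscription", "profile"]

def synthesize_queries(n, seed=0):
    """Generate n synthetic queries; seeding makes the eval set repeatable."""
    rng = random.Random(seed)
    return [
        rng.choice(TEMPLATES).format(
            action=rng.choice(ACTIONS), object=rng.choice(OBJECTS)
        )
        for _ in range(n)
    ]

def toy_router(query):
    """A stand-in 'agent' that routes a query to a tool by keyword."""
    return "billing" if "subscription" in query else "account"

# Run the synthetic queries through the agent and inspect its routing.
for q in synthesize_queries(5):
    print(f"{toy_router(q):8s} <- {q}")
```

A real evaluation would then score each routing decision against an expected label, which is where dialog-level and adversarial checks like those discussed in the episode come in.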
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ MLOps Coffee Sessions Special episode with LatticeFlow, How to Build Production-Ready AI Models for Manufacturing, fueled by our Premium Brand Partner, LatticeFlow. Deploying AI models in manufacturing involves navigating several technical challenges such as costly data acquisition, class imbalances, data shifts, leakage, and model degradation over time. How can you uncover the causes of model failures and prevent them effectively? This discussion covers practical solutions and advanced techniques to build resilient, safe, and high-performing AI systems in the manufacturing industry. // Bio Pavol Bielik Pavol earned his PhD at ETH Zurich, specializing in machine learning, symbolic AI, synthesis, and programming languages. His groundbreaking research earned him the prestigious Facebook Fellowship in 2017, representing the sole European recipient, along with the Romberg Grant in 2016. Following his doctorate, Pavol's passion for ensuring the safety and reliability of deep learning models led to the founding of LatticeFlow. Building on more than a decade of research, Pavol and a dynamic team of researchers at LatticeFlow developed a platform that equips companies with the tools to deliver robust and high-performance AI models, utilizing automatic diagnosis and improvement of data and models. Aniket Singh, Vision Systems Engineer and AI Researcher. Mohan Mahadevan Mohan Mahadevan is a seasoned technology leader with 25 years of experience in building computer vision (CV) and machine learning (ML) based products. Mohan has led teams to successfully deliver real world solutions spanning hardware, software, and AI based solutions in over 20 product families across a diverse range of domains, including Semiconductors, Robotics, Fintech, and Insuretech.
Mohan Mahadevan has led global teams in the development of cutting-edge technologies across a range of disciplines including computer vision, machine learning, optical and hardware architectures, system design, computational optimization and more. Jürgen Weichenberger 20+ years of advanced analytics, data science, database design, architecture, and implementation on various platforms to solve Complex Industry Problems. Industrial Analytics is the fusion of manufacturing, production, reliability, integrity, quality, sales, and market analytics across 10 industries. By combining skills and experience, we are creating the next-generation AI & ML Solutions for our clients. Leveraging a unique formula which allows us to model some of the most challenging manufacturing problems while building, scaling, and enabling the end-user to leverage next-generation data products. The Strategy & Innovation Team at Schneider is specialising in Industrial-Grade Challenges where we are applying ML & AI methods to achieve state of the art results. Personally, I am driving my team and my own education to extend the limits of AI & ML beyond what is currently possible. I hold more than 15 patents and I am working on new innovations. I am working with our partner eco-system to enrich our accelerators with modern ML/AI techniques, and integrating robotic equipment allows me to create next-generation solutions. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://latticeflow.ai/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Timestamps: [00:00] Demetrios' Intro [00:48] Announcements [01:57] Join us at our first in-person conference on June 25 all about AI Quality!
[03:39] Speakers' intros [06:00] AI ML uncommon use cases [10:14] Challenges in Implementing AI and ML in Heavy Industries [11:41] Optimizing AI use cases [18:07] Moving from PoC to Production [20:53] Hybrid AI Integration for Safety [28:28] Training AI for Defect Variability [33:18] Challenges in AI Integration [35:39] Metrics for Evaluating Success [37:27] Challenges in AI Integration [44:39] Usage of LLMs [50:34] Fine-tuning AI Models [53:20] Trust Dynamics: TML vs LLM [55:23] Wrap up
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Miguel Fierro is a Principal Data Science Manager at Microsoft and holds a PhD in robotics. From Robotics to Recommender Systems // MLOps Podcast #240 with Miguel Fierro, Principal Data Science Manager at Microsoft. Huge thank you to Zilliz for sponsoring this episode. Zilliz - https://zilliz.com/. // Abstract Miguel explains the limitations and considerations of applying ML in robotics, contrasting its use against traditional control methods that offer exactness, which ML approaches generally approximate. He discusses the integration of computer vision and machine learning in sports for player movement tracking and performance analysis, highlighting collaborations with European football clubs and the role of artificial intelligence in strategic game analysis, akin to a coach's perspective. // Bio Miguel Fierro is a Principal Data Science Manager at Microsoft Spain, where he helps customers solve business problems using artificial intelligence. Previously, he was CEO and founder of Samsamia Technologies, a company that created a visual search engine for fashion items allowing users to find products using images instead of words, and founder of the Robotics Society of Universidad Carlos III, which developed different projects related to UAVs, mobile robots, humanoid robots, and 3D printers. Miguel has also worked as a robotics scientist at Universidad Carlos III of Madrid (UC3M) and King’s College London (KCL) and has collaborated with other universities like Imperial College London and IE University in Madrid. Miguel is an Electrical Engineer from UC3M, holds a PhD in robotics from UC3M in collaboration with KCL, and graduated from MIT Sloan School of Management.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://miguelgfierro.com GitHub: https://github.com/miguelgfierro/ RecSys at Spotify // Sanket Gupta // MLOps Podcast #232 - https://youtu.be/byH-ARJA4gk Recommenders joins LF AI & Data as new Sandbox project: https://cloudblogs.microsoft.com/opensource/2023/10/10/recommenders-joins-lf-ai-data-as-new-sandbox-project/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Miguel on LinkedIn: https://www.linkedin.com/in/miguelgfierro/ Timestamps: [00:00] Miguel's preferred coffee [00:11] Takeaways [02:25] Robotics [10:44] Simpler solutions over ML [15:11] Robotics and Computer Vision [19:15] Basketball object detection [22:43 - 23:50] Zilliz Ad [23:51] Mr. Recommenders and Recommender systems' common patterns [31:35] Embeddings and Feature Stores [42:34] Experiment ROI for leadership [47:17] Hi ROI investments [51:13] LLMs in Recommender Systems [54:51] Wrap up
Uber's Michelangelo: Strategic AI Overhaul and Impact // MLOps podcast #239 with Demetrios Brinkmann. Huge thank you to Weights & Biases for sponsoring this episode. WandB Free Courses - http://wandb.me/courses_mlops // Abstract Uber's Michelangelo platform has evolved significantly through three major phases, enhancing its capabilities from basic ML predictions to sophisticated uses in deep learning and generative AI. Initially, Michelangelo 1.0 faced several challenges such as a lack of deep learning support and inadequate project tiering. To address these issues, Michelangelo 2.0 and subsequently 3.0 introduced improvements like support for PyTorch, enhanced model training, and integration of new technologies like Nvidia’s Triton and Kubernetes. The platform now includes advanced features such as a GenAI gateway, robust compliance guardrails, and a system for monitoring model performance to streamline and secure AI operations at Uber. // Bio At the moment Demetrios is immersing himself in Machine Learning by interviewing experts from around the world in the weekly MLOps.community meetups. Demetrios constantly learns and engages in new activities to get uncomfortable and learn from his mistakes. He tries to bring creativity into every aspect of his life, whether analyzing the best paths forward, overcoming obstacles, or building Lego houses with his daughter.
// MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links From Predictive to Generative – How Michelangelo Accelerates Uber’s AI Journey blog post: https://www.uber.com/en-JP/blog/from-predictive-to-generative-ai/ Uber's Michelangelo: https://www.uber.com/en-JP/blog/michelangelo-machine-learning-platform/ The Future of Feature Stores and Platforms // Mike Del Balso & Josh Wills // MLOps Podcast # 186: https://youtu.be/p5F7v-w4EN0 Machine Learning Education at Uber // Melissa Barr & Michael Mui // MLOps Podcast #156: https://youtu.be/N6EbBUFVfO8 --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Timestamps: [00:00] Uber's Michelangelo platform evolution analyzed in podcast [03:51 - 4:50] Weights & Biases Ad [05:57] Uber creates Michelangelo to streamline machine learning [07:44] Michelangelo platform's tech and flexible system [11:49] Uber Michelangelo platform adapted for deep learning [16:48] Uber invests in ML training for employees [19:08] Explanation of blog content, ML quality metrics [22:38] Michelangelo 2.0 prioritizes serving latency and Kubernetes [26:30] GenAI gateway manages model routing and costs [31:35] ML platform evolution, legacy systems, and maintenance [33:22] Team debates maintaining outdated tools or moving on [34:41] Please like, share, leave feedback, and subscribe to our MLOps channels! [34:57] Wrap up
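The GenAI gateway pattern mentioned in the abstract above (a single entry point that routes requests across models and tracks cost) can be sketched roughly as follows. This is a hedged illustration of the pattern, not Uber's internal API: the model names, per-token prices, routing policy, and whitespace token count are all invented stand-ins.

```python
# Invented per-1K-token prices for two hypothetical model tiers.
PRICE_PER_1K_TOKENS = {"small-model": 0.0005, "large-model": 0.03}

class GenAIGateway:
    """Single entry point: routes each request to a model and records spend."""

    def __init__(self):
        self.spend = {}  # model name -> accumulated cost

    def route(self, prompt, needs_reasoning=False):
        # Crude routing policy: only send hard requests to the expensive model.
        model = "large-model" if needs_reasoning else "small-model"
        tokens = len(prompt.split())  # stand-in for a real tokenizer
        cost = tokens / 1000 * PRICE_PER_1K_TOKENS[model]
        self.spend[model] = self.spend.get(model, 0.0) + cost
        return model

gw = GenAIGateway()
print(gw.route("summarize this support ticket"))            # cheap path
print(gw.route("plan a multi-step data migration",
               needs_reasoning=True))                       # expensive path
print(gw.spend)
```

Centralizing routing like this is what makes the compliance guardrails and cost controls described in the episode enforceable in one place.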
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Matthew McClean is a Machine Learning Technology Leader with the leading Amazon Web Services (AWS) cloud platform. He leads the customer engineering teams at Annapurna ML helping customers adopt AWS Trainium and Inferentia for their Gen AI workloads. Kamran Khan, Sr Technical Business Development Manager for AWS Inferentia/Trainium at AWS. He has over a decade of experience helping customers deploy and optimize deep learning training and inference workloads using AWS Inferentia and AWS Trainium. AWS Trainium and Inferentia // MLOps podcast #238 with Kamran Khan, BD, Annapurna ML and Matthew McClean, Annapurna Labs Lead Solution Architecture at AWS. Huge thank you to AWS for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract Unlock unparalleled performance and cost savings with AWS Trainium and Inferentia! These powerful AI accelerators offer MLOps community members enhanced availability, compute elasticity, and energy efficiency. Seamlessly integrate with PyTorch, JAX, and Hugging Face, and enjoy robust support from industry leaders like W&B, Anyscale, and Outerbounds. Perfectly compatible with AWS services like Amazon SageMaker, getting started has never been easier. Elevate your AI game with AWS Trainium and Inferentia! // Bio Kamran Khan Helping developers and users achieve their AI performance and cost goals for almost 2 decades.
Matthew McClean Leads the Annapurna Labs Solution Architecture and Prototyping teams helping customers train and deploy their Generative AI models with AWS Trainium and AWS Inferentia // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links AWS Trainium: https://aws.amazon.com/machine-learning/trainium/ AWS Inferentia: https://aws.amazon.com/machine-learning/inferentia/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Kamran on LinkedIn: https://www.linkedin.com/in/kamranjk/ Connect with Matt on LinkedIn: https://www.linkedin.com/in/matthewmcclean/ Timestamps: [00:00] Matt's & Kamran's preferred coffee [00:53] Takeaways [01:57] Please like, share, leave a review, and subscribe to our MLOps channels! [02:22] AWS Trainium and Inferentia rundown [06:04] Inferentia vs GPUs: Comparison [11:20] Using Neuron for ML [15:54] Should Trainium and Inferentia go together? [18:15] ML Workflow Integration Overview [23:10] The Ec2 instance [24:55] Bedrock vs SageMaker [31:16] Shifting mindset toward open source in enterprise [35:50] Fine-tuning open-source models, reducing costs significantly [39:43] Model deployment cost can be reduced innovatively [43:49] Benefits of using Inferentia and Trainium [45:03] Wrap up
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/. Benjamin Wilms is a developer and software architect at heart, with 20 years of experience. He fell in love with chaos engineering. Benjamin now spreads his enthusiasm and new knowledge as a speaker and author – especially in the field of chaos and resilience engineering. Chaos Engineering // MLOps podcast #237 with Benjamin Wilms, CEO & Co-Founder of Steadybit. Huge thank you to Amazon Web Services for sponsoring this episode. AWS - https://aws.amazon.com/ // Abstract How to build reliable systems under unpredictable conditions with Chaos Engineering. // Bio Benjamin has over 20 years of experience as a developer and software architect. He fell in love with chaos engineering 7 years ago and shares his knowledge as a speaker and author. In October 2019, he founded the startup Steadybit with two friends, focusing on developers and teams embracing chaos engineering. He relaxes by mountain biking when he's not knee-deep in complex and distributed code. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://steadybit.com/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Benjamin on LinkedIn: https://www.linkedin.com/in/benjamin-wilms/ Timestamps: [00:00] Benjamin's preferred coffee [00:28] Takeaways [02:10] Please like, share, leave a review, and subscribe to our MLOps channels!
[02:53] Chaos Engineering tldr [06:13] Complex Systems for smaller Startups [07:21] Chaos Engineering benefits [10:39] Data Chaos Engineering trend [15:29] Chaos Engineering vs ML Resilience [17:57 - 17:58] AWS Trainium and AWS Inferentia Ad [19:00] Chaos engineering tests system vulnerabilities and solutions [23:24] Data distribution issues across different time zones [27:07] Expertise is essential in fixing systems [31:01] Chaos engineering integrated into machine learning systems [32:25] Pre-CI/CD steps and automating experiments for deployments [36:53] Chaos engineering emphasizes tool over value [38:58] Strong integration into observability tools for repeatable experiments [45:30] Invaluable insights on chaos engineering [46:42] Wrap up
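The core chaos-engineering loop described in the episode (inject a controlled fault, then verify the system still meets expectations) can be sketched in plain Python. This is an illustrative toy, not Steadybit's tooling; the failure rate, retry policy, and service names are arbitrary assumptions.

```python
import random

class FlakyService:
    """Simulates a dependency that fails a controlled fraction of calls."""

    def __init__(self, failure_rate, seed=42):
        self.failure_rate = failure_rate
        self.rng = random.Random(seed)  # seeded for a repeatable experiment

    def call(self):
        if self.rng.random() < self.failure_rate:
            raise ConnectionError("injected fault")
        return "ok"

def call_with_retries(service, attempts=3):
    """The resilience mechanism under test: retry on transient failure."""
    last_err = None
    for _ in range(attempts):
        try:
            return service.call()
        except ConnectionError as err:
            last_err = err
    raise last_err

# Experiment: with a 30% injected failure rate, three attempts should keep
# the per-request success probability near 1 - 0.3**3 ≈ 97%.
service = FlakyService(failure_rate=0.3)
successes = 0
for _ in range(1000):
    try:
        call_with_retries(service)
        successes += 1
    except ConnectionError:
        pass
print(f"success rate with retries: {successes / 1000:.1%}")
```

The same hypothesis-then-verify structure carries over to the ML-specific experiments discussed in the episode, such as perturbing data distributions instead of network calls.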
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Tom Smoker is the cofounder of an early stage tech company empowering developers to create knowledge graphs within their RAG pipelines. Tom is a technical founder, and owns the research and development of knowledge graph tooling for the company. Managing Small Knowledge Graphs for Multi-agent Systems // MLOps podcast #236 with Tom Smoker, Technical Founder of whyhow.ai. A big thank you to @latticeflow for sponsoring this episode! LatticeFlow - https://latticeflow.ai/ // Abstract RAG is one of the more popular use cases for generative models, but there can be issues with repeatability and accuracy. This is especially applicable when it comes to using many agents within a pipeline, as the uncertainty propagates. For some multi-agent use cases, knowledge graphs can be used to structurally ground the agents and selectively improve the system to make it reliable end to end. // Bio Technical Founder of WhyHow.ai. Did Masters and PhD in CS, specializing in knowledge graphs, embeddings, and NLP. Worked as a data scientist to senior machine learning engineer at large resource companies and startups.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models: https://arxiv.org/abs/2401.01313 Understanding the type of Knowledge Graph you need — Fixed vs Dynamic Schema/Data: https://medium.com/enterprise-rag/understanding-the-type-of-knowledge-graph-you-need-fixed-vs-dynamic-schema-data-13f319b27d9e --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Tom on LinkedIn: https://www.linkedin.com/in/thomassmoker/ Timestamps: [00:00] Tom's preferred coffee [00:33] Takeaways [03:04] Please like, share, leave a review, and subscribe to our MLOps channels! [03:23] Academic Curiosity and Knowledge Graphs [05:07] Logician [05:53] Knowledge graphs incorporated into RAGs [07:53] Graphs & Vectors Integration [10:49] "Exactly wrong" [12:14] Data Integration for Robust Knowledge Graph [14:53] Structured and Dynamic Data [21:44] Scoped Knowledge Retrieval Strategies [28:01 - 29:32] LatticeFlow Ad [29:33] RAG Limitations and Solutions [36:10] Working on multi-agents, questioning agent definition [40:01] Concerns about performance of agent information transfer [43:45] Anticipating agent-based systems with modular processes [52:04] Balancing risk tolerance in company operations and control [54:11] Using AI to generate high-quality, efficient content [01:03:50] Wrap up
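The abstract's idea of "structurally grounding" an agent with a small knowledge graph can be sketched with a toy triple store and scoped retrieval. The schema and facts below are invented for illustration; WhyHow's actual tooling and graph representation are different.

```python
# A tiny knowledge graph as (subject, predicate, object) triples.
triples = [
    ("Acme", "has_product", "WidgetPro"),
    ("WidgetPro", "requires", "LicenseA"),
    ("LicenseA", "valid_in", "EU"),
]

def neighbors(entity):
    """All facts mentioning an entity, in either position."""
    return [t for t in triples if entity in (t[0], t[2])]

def grounded_context(entity, hops=2):
    """Collect facts within `hops` edges of an entity. An agent answering
    only from this scoped set is structurally grounded, instead of pulling
    arbitrary chunks from free-form retrieval."""
    seen, frontier, facts = {entity}, {entity}, []
    for _ in range(hops):
        nxt = set()
        for e in frontier:
            for s, p, o in neighbors(e):
                if (s, p, o) not in facts:
                    facts.append((s, p, o))
                nxt.update({s, o} - seen)
        seen |= nxt
        frontier = nxt
    return facts

print(grounded_context("Acme"))  # facts within 2 hops of Acme
```

Widening or narrowing `hops` is one crude knob for the scoped-retrieval tradeoff discussed in the episode: smaller scopes are more repeatable, larger ones recall more.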
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ David Nunez, based in Santa Barbara, CA, US, is currently a Co-Founder and Partner at Abstract Group, bringing experience from previous roles at First Round Capital, Stripe, and Slab. Just when we Started to Solve Software Docs, AI Blew Everything Up // MLOps Podcast #235 with Dave Nunez, Partner of Abstract Group co-hosted by Jakub Czakon. Huge thank you to Zilliz for sponsoring this episode. Zilliz - https://zilliz.com/. // Abstract Over the previous decade, the recipe for making excellent software docs mostly converged on a set of core goals: create high-quality, consistent content; use different content types depending on the task; and make the docs easy to find. For AI-focused software and products, the entire developer education playbook needs to be rewritten. // Bio Dave lives in Santa Barbara, CA with his wife and four kids. He started his tech career at various startups in Santa Barbara before moving to San Francisco to work at Salesforce. After Salesforce, he spent 2+ years at Uber and 5+ years at Stripe leading internal and external developer documentation efforts. In 2021, he co-authored Docs for Developers to help engineers become better writers. He's now a consultant, advisor, and angel investor for fast-growing startups. He typically invests in early-stage startups focusing on developer tools, productivity, and AI. He's a reading nerd, Lakers fan, and golf masochist.
// MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Website: https://www.abstractgroup.co/ Book: docsfordevelopers.com About Dave: https://gamma.app/docs/Dave-Nunez-about-me-002doxb23qbblme?mode=doc https://review.firstround.com/investing-in-internal-documentation-a-brick-by-brick-guide-for-startups https://increment.com/documentation/why-investing-in-internal-docs-is-worth-it/ Writing to Learn paper by Peter Elbow: https://peterelbow.com/pdfs/Writing_for_Learning-Not_just_Demonstrating.PDF --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Dave on LinkedIn: https://www.linkedin.com/in/djnunez/ Connect with Kuba on LinkedIn: https://www.linkedin.com/in/jakub-czakon/?locale=en_US Timestamps: [00:00] Dave's preferred coffee [00:13] Introducing this episode's co-host, Kuba [00:36] Takeaways [02:55] Please like, share, leave a review, and subscribe to our MLOps channels! 
[03:23] Good docs, bad docs, and how to feel them [06:51] Inviting Dev docs and checks [10:36] Stripe's writing culture [12:42] Engineering team writing culture [14:15] Bottom-up tech writer change [18:31] Stripe docs cult following [24:40] TriDocs Smart API Injection [26:42] User research for documentation [29:51] Design cues [32:15] Empathy-driven docs creation [34:28 - 35:35] Zilliz Ad [35:36] Foundational elements in documentation [38:23] Minimal infrastructure of information in "Read Me" [40:18] Measuring documentation with OKRs [43:58] Improve pages with Analytics [47:33] Google branded doc searches [48:35] Time to First Action [52:52] Dave's day in and day out and what excites him [56:01] Exciting internal documentation [59:55] Wrap up
Join us at our first in-person conference on June 25 all about AI Quality: https://www.aiqualityconference.com/ Cody Peterson has diverse work experience in the field of product management and engineering. Cody is currently working as a Technical Product Manager at Voltron Data, starting from May 2023. Previously, he worked as a Product Manager at dbt Labs from July 2022 to March 2023. MLOps podcast #234 with Cody Peterson, Senior Technical Product Manager at Voltron Data | Ibis project // Open Standards Make MLOps Easier and Silos Harder. Huge thank you to Weights & Biases for sponsoring this episode. WandB Free Courses - http://wandb.me/courses_mlops // Abstract MLOps is fundamentally a discipline of people working together on a system with data and machine learning models. These systems are already built on open standards we may not notice -- Linux, git, scikit-learn, etc. -- but are increasingly hitting walls with respect to the size and velocity of data. Pandas, for instance, is the tool of choice for many Python data scientists -- but its scalability is a known issue. Many tools make the assumption of data that fits in memory, but most organizations have data that will never fit in a laptop. What approaches can we take? One emerging approach with the Ibis project (created by the creator of pandas, Wes McKinney) is to leverage existing "big" data systems to do the heavy lifting on a lightweight Python data frame interface. Alongside other open source standards like Apache Arrow, this can allow data systems to communicate with each other and users of these systems to learn a single data frame API that works across any of them. Open standards like Apache Arrow, Ibis, and more in the MLOps tech stack enable freedom for composable data systems, where components can be swapped out allowing engineers to use the right tool for the job to be done. It also helps avoid vendor lock-in and keep costs low.
// Bio Cody is a Senior Technical Product Manager at Voltron Data, a next-generation data systems builder that recently launched an accelerator-native GPU query engine for petabyte-scale ETL called Theseus. While Theseus is proprietary, Voltron Data takes an open periphery approach -- it is built on and interfaces through open standards like Apache Arrow, Substrait, and Ibis. Cody focuses on the Ibis project, a portable Python dataframe library that aims to be the standard Python interface for any data system, including Theseus and over 20 other backends. Prior to Voltron Data, Cody was a product manager at dbt Labs focusing on the open source dbt Core and launching Python models (note: models is a confusing term here). Later, he led the Cloud Runtime team and drastically improved the efficiency of engineering execution and product outcomes. Cody started his career as a Product Manager at Microsoft working on Azure ML. He spent about 2 years on the dedicated MLOps product team, and 2 more years on various teams across the ML lifecycle including data, training, and inferencing. He is now passionate about using open source standards to break down the silos and challenges facing real world engineering teams, where engineering increasingly involves data and machine learning. // MLOps Jobs board https://mlops.pallet.xyz/jobs // MLOps Swag/Merch https://mlops-community.myshopify.com/ // Related Links Ibis Project: https://ibis-project.org Apache Arrow and the “10 Things I Hate About pandas”: https://wesmckinney.com/blog/apache-arrow-pandas-internals/ --------------- ✌️Connect With Us ✌️ ------------- Join our slack community: https://go.mlops.community/slack Follow us on Twitter: @mlopscommunity Sign up for the next meetup: https://go.mlops.community/register Catch all episodes, blogs, newsletters, and more: https://mlops.community/ Connect with Demetrios on LinkedIn: https://www.linkedin.com/in/dpbrinkm/ Connect with Cody on LinkedIn: https://linkedin.com/in/codydkdc
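The "one dataframe interface, many engines" idea behind Ibis discussed above can be illustrated with a stdlib-only toy: the same deferred expression is either executed locally or compiled to SQL by whichever backend is plugged in. This is a sketch of the concept only, not the real Ibis API; all class and method names are invented.

```python
class SumByExpr:
    """A deferred 'group by key, sum value' expression, built once and
    handed to any backend for execution."""

    def __init__(self, key, value):
        self.key, self.value = key, value

class PurePythonBackend:
    """Executes the expression eagerly over in-memory rows."""

    def execute(self, expr, rows):
        totals = {}
        for row in rows:
            k = row[expr.key]
            totals[k] = totals.get(k, 0) + row[expr.value]
        return totals

class SqlBackend:
    """Compiles the same expression to SQL that a real engine
    (DuckDB, BigQuery, ...) could run, instead of computing it here."""

    def execute(self, expr, table_name):
        return (f"SELECT {expr.key}, SUM({expr.value}) "
                f"FROM {table_name} GROUP BY {expr.key}")

expr = SumByExpr(key="user", value="amount")
rows = [{"user": "a", "amount": 1}, {"user": "a", "amount": 2},
        {"user": "b", "amount": 3}]
print(PurePythonBackend().execute(expr, rows))  # computed locally
print(SqlBackend().execute(expr, "events"))     # same expression, compiled
```

Because the expression is decoupled from execution, swapping the backend changes where the heavy lifting happens without changing user code, which is the composability argument made in the episode.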
Comments (2)

Marco Gorelli

"in Kaggle you normally see a 1-1 ratio of positive to negative examples" huh? has he ever done a Kaggle competition? this statement is totally off

Jul 27th