Discover
A Beginner's Guide to AI
A Beginner's Guide to AI
Author: Dietmar Fischer
Subscribed: 596Played: 7,931Subscribe
Share
© Dietmar Fischer
Description
"A Beginner's Guide to AI" makes the complex world of Artificial Intelligence accessible to all. Each episode asks someone working with AI about what they do and how AI can help you. Ideal for novices, tech enthusiasts, and the simply curious, this podcast transforms AI learning into an engaging, digestible journey. Join us as we take the first steps into AI 🚀
Hosted on Acast. See acast.com/privacy for more information.
345 Episodes
Reverse
In this episode of Beginner’s Guide to AI, Dietmar Fischer talks with Peter McAllister about AI risk, AI safety, AI sentience, regulation, and the strange overlap between science fiction and current reality. Peter is the author of The Code: If Your AI Loses its Mind, Can it Take Meds?, a near-future novel about an AI on the moon that begins dismantling it with catastrophic consequences. Peter describes the book as a story about Gene, an AI developed for asteroid-belt mining tests, whose instability turns into a race against time for humanity. Peter also has a background in engineering, science, IT, and technology management, which explains why the conversation feels grounded rather than hand-wavy.The discussion goes far beyond fiction. Peter explains why the biggest AI danger may come from bias, compounding error, flawed assumptions, and organizations that fail to notice warning signs early enough. He argues that AI safety is not just a technical debate for labs, but a practical leadership issue for companies, regulators, and anyone deploying automated systems in the real world. The episode also explores sentience, AI rights, robotics, augmentation, business adoption, and why he uses AI in work but not in fiction writing.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧🎙️ About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com💬 Quotes from the Episode“An AI going rogue could just be something that is capable of doing something fairly simple and straightforward, but ridiculously fast in a ridiculous number of times.”“I expected it to sit on the bookshelves under dystopian fiction, and now it seems to be appearing under current affairs.”“LLMs are just a really, really, really, really, really overblown autocorrect.”🕒 Chapters00:00 Introduction to Peter McAllister01:09 Why Peter Became Interested in AI02:05 The Book Premise and AI Mental Illness03:33 Why Small AI Errors Can Scale Into Disasters06:06 Can Governments Really Regulate AI12:18 The Social Bargain We Make With Dangerous Technology17:14 Optimism, Pessimism, and the Future of AI19:05 Why Peter Would Write a Sequel Instead of Changing the Book20:28 AI Rights, Sentience, and Legal Control24:03 Why Peter Does Not Use AI to Write Fiction31:00 Robots, Human Augmentation, and the Physical Future of AI33:47 Where to Find the Book🔗 Where to find Peter McAllisterWebsite: petermcallisterauthor.comBook: The Code: If Your AI Loses its Mind, Can it Take Meds? on Amazon: amazon.com/Code-your-loses-mind-take-ebook/dp/B085ZGGYZ3 Hosted on Acast. See acast.com/privacy for more information.
In this episode of A Beginner’s Guide to AI, host Dietmar Fischer talks with Roman Chernin from Nebius, about how AI democratization is reshaping the enterprise world. Roman reveals what it really takes to move from prototype LLMs to reliable, scalable AI platforms - and why most companies don’t need to train their own models to harness AI’s potential. 📧💌📧 Tune in to get my thoughts and all episodes - don’t forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧From his early years at Yandex, where machine learning quietly powered maps and search, to helping Nebius build global AI infrastructure, Roman’s story is a blueprint for how cloud platforms can make AI accessible to everyone. He explains how Nebius Token Factory enables businesses to deploy AI applications fast, how to navigate the minefield of compliance and cost, and why real success in AI comes from better collaboration and iteration — not from “being a genius.” 🚀 Key HighlightsWhat democratizing AI means for modern enterprisesWhy infrastructure scaling 10× a year forces constant reinventionHow Nebius bridges the gap between OpenAI and open-source ecosystemsMaking AI usable for non-technical teams through better developer experienceWhy Europe still has a chance to catch up in the AI raceHow AI changes leadership, creativity, and collaboration💡 Quotes from the Episode“The goal isn’t to build more data centers - it’s to make AI usable for people who aren’t AI experts.”“You don’t need your own LLM. You need a problem to solve - and the right infrastructure to do it.”“If you want to scale a system ten times, you don’t fix it - you rewrite it.”“Compute is becoming the new electricity, but we don’t want to be just a utility company.”“The real bottleneck isn’t GPUs - it’s making AI usable, compliant, and cost-efficient for real businesses.”“We can’t forbid AI use; it’s already here. The real challenge is helping society adapt fast enough.”🧾 Chapters00:00 Introduction - Welcoming Roman Chernin to the show00:28 Why AI? Roman’s early journey and Yandex years01:24 What Nebius does: Building AI infrastructure for builders03:02 The challenge of scaling AI infrastructure 10× per year05:06 From utility computing to full-stack AI platforms07:15 Why developer experience matters for AI growth09:45 How enterprises move from OpenAI to open-source models12:10 Compliance, data sovereignty, and enterprise security14:55 Cost, latency, and optimization challenges in AI scaling16:50 Which industries are adopting AI fastest18:40 Democratizing AI for mid-sized businesses19:35 Nebius Token Factory: Enabling custom AI APIs22:14 Open-source vs closed models - the real trade-offs26:03 The U.S. vs. European AI market and regulation31:20 How governments can drive AI demand (not just infrastructure)33:58 How AI changes leadership, creativity, and collaboration37:40 Why iteration beats genius - and how AI accelerates it38:56 Roman’s personal “wow moment” with AI video generation40:55 The real risks of AI - and how fast society must adapt43:35 Final thoughts and where to find Nebius and Roman Where to Find Roman Chernin and NebiusNebius WebsiteNebius Token FactoryRoman Chernin on LinkedInMusic Credit: “Modern Situations” by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
🚀 The Hidden Cost of AI: Losing Meaning, Not JobsAI is not just automating work. It is challenging the very foundation of human identity.In this episode, Derek Rydall breaks down why the biggest risk of AI is not unemployment, but a global meaning crisis. As intelligence becomes cheap and abundant, the real question becomes: what are humans for?You’ll learn why purpose is becoming the ultimate competitive advantage, how attention is being hijacked by algorithms, and what it takes to stay relevant in a world where machines outperform us.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧🧠 Quotes from the Episode“If you don’t know yourself better than the algorithm knows you, it will use you.”“Intelligence is becoming a commodity. Humanity is becoming the moat.”“The real danger of AI is not losing your job. It’s losing your sense of meaning.”⏱️ Chapters00:00 From Hacker to Monk to AI Thinker04:00 The AI “Ark” Vision and Existential Risk08:30 Why AI Creates a Meaning Crisis13:30 What Happens When Intelligence Becomes Free18:00 Identity Crisis and the Future of Work23:00 How to Find Purpose in the AI Age32:00 Attention Is the New Battleground41:00 The Urgency: 12–24 Month Window47:00 Practical Steps to Stay Relevant🔗 Where to find Derek RydallWebsite: derekrydall.comYouTube: Your Legendary LifePodcast: Emergence👤 About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com Hosted on Acast. See acast.com/privacy for more information.
🎙️ Machine Ethics Podcast x Beginner's Guide to AIAI is everywhere. But almost nobody agrees on what it actually is.In this episode, Ben Byford from the Machine Ethics Podcast and Dietmar Fischer explore why AI feels intelligent while fundamentally being something very different.From AI misconceptions to generative AI risks, this conversation breaks down the gap between perception and reality and why it matters for business leaders, marketers, and decision-makers.You’ll learn why AI literacy is becoming essential, how misunderstanding AI creates real business risks, and what it takes to use AI responsibly in a rapidly changing landscape.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧💡 Quotes from the Episode“We wanted Spock, but what we got is something closer to Kirk.”“The real danger is not AI itself, but how we misunderstand it.”“AI feels intelligent, but that doesn’t mean it actually understands anything.”⏱️ Chapters00:00 What Is AI Really05:30 AI vs Human Intelligence10:15 Why People Misunderstand AI18:40 AI as a Tool vs AI as a “Being”26:30 The Risks of Trusting AI34:30 AI, Society and Human Behavior44:00 Future of AI Understanding🔎 Where to find BenWebsite: Machine Ethics PodcastLinkedIn: linkedin.com/in/ben-byford/👤 About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at https://argoberlin.com/🎧 If you enjoyed this episode, share it with someone who still thinks AI is “intelligent.” Hosted on Acast. See acast.com/privacy for more information.
What does the Catholic Church actually think about artificial intelligence? A lot more than you might expect.In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores the Vatican’s surprisingly sharp position on AI ethics, human dignity, deepfakes, truth, and the growing risk of letting machines replace judgment rather than support it. This is not a sermon against technology, and it is not a blessing over every shiny new model either. It is a serious look at AI as a human tool that can do real good, but only if it stays in its place.For business professionals, founders, marketers, and executives, this conversation goes far beyond religion. It gets to the core of responsible AI, AI governance, human centered AI, and the hidden cost of outsourcing thought. We look at why the Catholic Church and AI belong in the same debate, what the Vatican says about simulation, synthetic media, and trust, and why overreliance on AI can slowly reshape how people think, decide, communicate, and relate to one another.You will hear why the Church draws such a hard line between human intelligence and artificial intelligence, why dignity matters more than efficiency, why deepfakes are about more than online deception, and why concentrated AI power should concern anyone who cares about work, leadership, media, or democracy. The episode also touches on healthcare, education, autonomous weapons, and the broader anthropological challenge of AI: not just what machines can do, but what humans become while building and using them.If you are interested in Catholic Church and AI, Vatican AI ethics, AI and human dignity, deepfakes and trust, AI overreliance, and AI governance, this episode gives you a clear and provocative framework for thinking about the future.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧Quotes from the Episode“Servant, not master; instrument, not idol; support act, not replacement.”“Tools always train their users.”“Use the machine, do not become like it.”Chapters00:00 Why the Vatican Takes AI Seriously02:34 Human Intelligence vs Artificial Intelligence05:21 Human Dignity in an Age of Optimization08:07 Deepfakes, Voices, Faces, and the Crisis of Trust11:02 Why AI Overreliance Changes How We Think14:06 Power, Warfare, and the Human Future of AIAbout Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com🎧 Thanks for listening to A Beginner’s Guide to AI. Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence can generate answers fast, but can it generate knowledge you can trust?In this episode of Beginner’s Guide to AI, Dietmar Fischer talks with Jonathan Fraine and Raja Amelung about why human knowledge still matters in the age of LLMs. Together they explore Wikipedia, Wikimedia, AI hallucinations, trust in AI, free knowledge, and the future of reliable information online.This is not another generic AI hype conversation. It is a grounded discussion about what happens when people confuse fluent machine output with verified truth. Jonathan and Raja explain why Wikipedia still depends on human editors, why source verification matters, how Wikimedia thinks about AI, where small language models may actually be useful, and why the future of knowledge should not be left to black box systems alone.You will learn:✨ Why Wikipedia cannot simply be replaced by generative AI✨ What AI hallucinations reveal about trust and knowledge✨ How Wikidata and small language models can support search without pretending to be truth✨ Why free knowledge and attribution matter in an AI economy✨ What younger users may value about Wikipedia in an age of tracking and AI summaries✨ Why critical thinking matters more than ever📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧Quotes from the Episode💬 “Knowledge is human.”💬 “You can always start your research on Wikipedia, but you should never end there.”💬 “The biggest problem is the trust in the source.”Chapters00:00 Why Human Knowledge Still Matters in the Age of AI03:17 Small Language Models, Wikidata, and Better Search06:14 Why Wikipedia Does Not Want AI Written Articles13:49 Free Knowledge, Attribution, and AI Companies Using Wikipedia21:06 Trust, Search, and the Future of Wikipedia in an AI World35:43 Personal AI Use Cases, Risks, and the Limits of Automation40:08 Worst Case Scenarios for AI, Trust, Bias, and Human JudgmentWhere to find the Raja and Jonathan🔗 Jonathan Fraine: linkedin.com/in/jonathan-fraine🔗 Raja Amelung: linkedin.com/in/raja-amelung-088890a🔗 Wikimedia Deutschland: wikimedia.de🔗 Wikimedia World: commons.wikimedia.orgAbout Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com Hosted on Acast. See acast.com/privacy for more information.
AI is no longer just a chatbot that helps you write emails faster. In this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Ethan Ouyang to explore how agentic AI is changing the way businesses are built, managed, and scaled. Ethan is publicly identified with ATOMS, and the platform’s official site is atoms.dev, where it is described as a multi-agent AI workflow for building products without code.This conversation goes far beyond simple prompting. Ethan explains how AI agents can work together like a business team, handling research, planning, product creation, workflow automation, iteration, and even revenue optimization. The result is a shift from “vibe coding” to something much bigger: building real businesses with AI.You’ll hear:✨ Why ChatGPT-level use cases are only the beginning✨ How AI agents can support founders, solo operators, and managers✨ Why judgment, taste, and domain knowledge still matter✨ What it means to become an AI native company✨ How leadership changes when your team includes AI workers✨ Why custom AI tools may beat bloated SaaS products📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧🎙️ About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com💬 Quotes from the Episode“Atoms is fundamentally different. This is not code. It is decision.”“You have a team, not just an engineer.”“The trivial work, the tedious work, should belong to AI.”🕒 Chapters00:00 Welcome and what ATOMS actually does02:26 From prompting AI to building a real business05:33 Why AI agents matter more than coding alone10:18 Who uses ATOMS: founders, managers, and operators13:03 How to integrate AI agents into real workflows23:22 Leadership, hiring, and managing AI workers27:13 The future of agentic AI and autonomous systems31:37 What an AI native company looks like35:18 China, the US, and the AI application race40:03 Safety, the Terminator question, and responsible AI42:14 Where to find Ethan and ATOMS🔗 Where to find Ethan OuyangPlatform: ATOMS.devCompany: DeepWisdom.AIX: com/atoms_devYouTube: youtube.com/@atoms_devLinkedIn: Ethan Ouyang🎵 Music credit: "Modern Situations" by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Human-Centered AI at Work with Monica Marquez: A Practical Adoption PlaybookIf you’re still treating AI like a shiny gadget, this episode will be a polite intervention.Monica Marquez (Flipwork) shows how to build a human-centered AI adoption playbook that actually sticks.We dig into AI as a partner, not a tool; psychological safety for teams; and the one-workflow-per-month rule that turns experimentation into measurable AI ROI.You’ll learn how to avoid work slop, build agentic workflows, and translate machine output into authentic intelligence that reflects your expertise. 🤖What you’ll learnShift identity first: “I experiment with AI daily.”Redesign workflows before adding tools.Create psychological safety so teams can try, fail, and improve.Kill work slop and layer your context for quality.Build agentic workflows that scale judgment and consistency.Track time saved and quality gains to prove ROI.📧💌📧 Tune in to get my thoughts and all episodes, don’t forget to subscribe to our Newsletter.📧💌📧Quotes from the Episode“The real danger isn’t killer robots. It’s disengaged humans.”“Don’t ship work slop. Turn artificial intelligence into your authentic intelligence.”“Redesign your workflow first, then layer AI. Otherwise you just automate the old mess.”“Stop treating AI like a tool. Treat it like a partner.”“Adoption starts with identity: I experiment with AI every day.”“Use AI for five-dollar tasks so you can solve five-thousand-dollar problems.”Chapters00:00 Welcome, who is Monica Marquez and what is Flipwork02:59 AI as a partner, not a tool05:34 Practical example: recruiting, prompts, and human judgment07:02 Generational beliefs, “artificial intern,” and mindset shifts11:24 From effort to impact: redefining success with AI12:46 Redesigning workflows before layering AI14:44 Psychological safety and daily experiments16:55 Leaders model usage, run side-by-side experiments18:37 Avoiding “work slop” and building authentic intelligence21:44 Doing more of your “zone of genius” with AI24:39 The one-workflow-per-month rule29:25 Industry adoption patterns, lessons from Blockbuster vs Netflix33:12 Personal AI use cases and voice-based workflows36:32 Matrix, Terminator, and Monica’s real fear: disengaged humans37:58 Where to find Monica and FlipworkWhere to find Monica MarquezHer Agency: FlipworkMonica’s site: themonicamarquez.comNewsletter: Ay Ay Ay, AIAbout Dietmar FischerHost of Beginner’s Guide to AI. Economist and digital marketer helping teams turn AI from hype into workflows.Training, talks, and courses with thousands of participants. 🎙️Go to argoberlin.com to see how we can help you!Music credit: “Modern Situations” by Unicorn Heads 🎵 Hosted on Acast. See acast.com/privacy for more information.
In this episode of A Beginner’s Guide to AI, Dietmar Fischer talks with Alex Levin, the Co-Founder and CEO of Regal.io, about how Voice AI is bringing real human conversation back to customer service.For years, businesses have been hiding behind IVRs and chatbots - cutting off the personal touch that customers crave. Alex explains how AI voice agents are transforming the experience, allowing brands to actually talk to their customers again, at scale, with empathy, emotion, and precision.We dive into what’s behind this transformation - from the technology (OpenAI, Google, Anthropic, ElevenLabs, Deepgram) to the psychology of trust and emotion in customer communication. Alex shares how Regal.io helps enterprises in healthcare, insurance, and finance use AI-powered voice agents that can outperform human representatives while lowering costs and improving satisfaction.From replacing call center frustration with warm, natural conversations to the rise of empathetic AI agents, this episode explores what happens when voice meets intelligence.📧💌📧Tune in to get my thoughts and all episodes — and don’t forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧Quotes from the Episode“If a customer wants to talk to you, you’re lucky - and if they want to do it by voice, you should let them.”“The personalization possible with AI agents is more human than humans.”“Everyone told me voice was dead - they were wrong.”CHAPTERS 00:00 Introduction - Why Voice AI Is Making a Comeback00:54 Alex Levin’s Journey from Startups to Voice AI03:42 “Voice Isn’t Dead” - The Moment That Sparked Regal.io06:25 How Voice AI Actually Works Behind the Scenes08:47 Using AI Agents to Talk to Customers at Scale10:58 Data, Scripts, and What Makes a “Good” AI Conversation13:33 Legal Hurdles and Privacy in Voice AI15:50 Why Healthcare and Insurance Are Early Adopters18:26 How Customers React When They Realize It’s an AI21:12 Real Use Cases - From Banks to Everyday Services24:19 Human in the Loop: When AI Hands Over to People26:55 Can Small Businesses Afford Voice AI Yet?28:48 The AI Startup Boom and Smarter Investment Strategies32:20 Leadership in the Age of AI - New Skills, New Metrics35:12 Why Young Professionals Must Learn AI Tools Now37:45 How Alex Personally Uses AI (and Where It Saves Time)39:24 The “Terminator Question” - Should We Be Worried?42:08 Closing Reflections and Where to Find Regal.ioWhere to Find Alex Levin🌐 Website: www.regal.io🧑🏻 LinkedIn: Alex Levin🎙 About Dietmar Fischer:Dietmar is a podcaster, AI marketer, and economist from Berlin.If you want to get your AI or your digital marketing going - just contact him at Argoberlin.com!🎵 Music credit: “Modern Situations” by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
In this episode, Dietmar Fischer talks with Tallulah Le Merle, a humanist technologist and investor, about how to think clearly in the age of AI without falling into doomsday panic or blind optimism. You’ll get a practical mental model of the AI stack, a grounded take on AI alignment risk, and a refreshing argument for hope as a strategic posture that shapes what gets built. 🤖🌍🧠What you’ll learn✅ Why fear based AI narratives can freeze action and distort decisions✅ How the future of work may shift from routine cognitive tasks to deeper human capabilities✅ The overlooked forms of intelligence AI cannot easily replace somatic, ecological, communal✅ How AI investing works in early stage startups and what responsible due diligence looks like✅ The AI stack explained simply infrastructure, model layer, application layer✅ What agentic AI means today and where it is heading📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com 🌱🚀Chapters00:00 Meet Tallulah Le Merle and why “hope” is her AI stance03:52 Fear narratives vs hope as a practical posture08:06 Disruptive to what Rethinking modern work and human thriving10:14 Jobs replaced vs jobs created and the transition problem12:36 What’s left for humans Somatic ecological and communal intelligence18:47 The humanist builder and why ethics should unlock capital28:55 The AI stack explained infrastructure model layer application layer32:30 Why apps and agents are the near-term investment boom40:32 The alignment problem Terminator narratives and the futures we build46:12 Fantasy, imagination, and why it matters for tech trajectories49:36 Where to find Tallulah and the upcoming bookQuotes from the Episode💬 “AI is a tool. And like a hammer. Hammer, you could use it to build a house or as a murder weapon.”💬 “Hope is this sliver of openness to the possibility that something good could happen.”💬 “Disruptive to what Actually, a lot of the way we live and work and operate as humans today is dystopian.”💬 “It forces us to ask these existential questions, like, what is a human”💬 “I actually think it should be a prerequisite for unlocking capital.”💬 “We are so early We’re in inning one of a nine inning baseball game.”Where to find Tallulah🔗 LinkedIn: linkedin.com/in/tallulahlemerle🔗 Website: tallulahlemerle.com🔗 Updates on her book: don't forget to follow her on LinkedIn 🚀 Music credit: "Modern Situations" by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
🎙️ In this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Torrey Leonard, CEO of Thoughtly, to unpack the real business use case for voice AI agents: follow up with every lead, qualify fast, and hand the best conversations to humans.If your funnel generates thousands of leads, the bottleneck is not “lack of interest.” It’s speed, timing, and the grind of dialing. Torrey explains how Thoughtly’s AI phone agents call inbound leads, answer initial questions, build rapport, and then transfer the call to a licensed human closer. Humans stay in the loop for the big life decisions. The AI handles the repetitive first steps that burn out teams.You will also learn:✅ Why voice beats typing as the fastest interface for human communication✅ Why customer service voice AI is harder than sales and lead qualification✅ How onboarding works with CRM integrations like Salesforce and HubSpot✅ Why A/B testing matters before ramping to 100% lead volume✅ Why the “moat” is orchestration, workflows, and guardrails, not just a great voice model✅ What agentic AI and omni-channel “next best action” looks like next📧💌📧Tune in to get my thoughts and all episodes, don’t forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comChapters00:00 From Minecraft to voice first AI and the origin of Thoughtly02:44 What Thoughtly does AI calls that qualify and transfer to humans07:45 Trust, disclosure, and why customer service voice AI is so hard12:50 Scaling across verticals dialects and the model orchestration stack18:12 Onboarding CRM integrations and A/B testing to 100% volume28:21 The next wave autonomous agents OpenClaw and a sane take on AI riskQuotes from the Episode“After 90 seconds we’ve got a great rapport built. Boom, transferred over to a licensed agent.”“The voice isn’t the unique selling proposition. It’s the orchestration of the whole stuff.”“Nobody needs to worry about the Terminator scenario, unless we humans build Terminator.”Where to find the Guest🌐 Thoughtly: thoughtly.com🔗 Torrey Leonard on LinkedIn: linkedin.com/in/torrey-leonard/Music credit: “Modern Situations” by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
🎧 What makes us human in the age of AI?This episode of A Beginner’s Guide to AI explores one of the most important questions for business leaders today. As AI becomes more capable, the real challenge is not what it can do, but what we should never outsource.We explore The Blurring Test, a fascinating experiment where thousands of people tried to prove their humanity to a chatbot. What they revealed changes how we should think about AI, business, and identity.You will learn why AI can mimic humans but cannot experience reality, why human judgment becomes more valuable in an automated world, and how to use AI without losing authenticity and meaning.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧👤 About Dietmar Fischer:Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at https://argoberlin.com/💡 Quotes from the Episode"AI can follow the recipe, but it cannot taste the cake.""Your humanity is not what you do, but why you do it.""The real risk is not AI replacing us, but us becoming more like AI."⏱ Chapters00:00 The Question That Changes Everything04:30 The MrMind Experiment11:20 AI vs Human Identity19:10 The Cake Test Explained26:40 AI in Business and Decision Making34:00 What Makes Us Human🚀 This episode challenges how you think about AI, business, and yourself. The future will not be about replacing humans. It will be about understanding what makes us irreplaceable. Hosted on Acast. See acast.com/privacy for more information.
If you want to know more about the podcast, about how it's produced, what are the challenges and wins, about some fun facts, a little bit behind-the-scenes, this episode is for you, as I tell you all about it - at least all the things I found noteworthy 😉📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Your AI might not be hacked. It might be persuaded.In this episode of A Beginner’s Guide to AI, we unpack one of the most underestimated threats in modern business: prompt injection. As AI systems and AI agents become deeply embedded in workflows, they don’t just process information anymore. They act on it. And that creates a completely new category of AI security risks.We explore how attackers can manipulate AI systems using nothing but language, why AI struggles to separate instructions from data, and how this leads to real-world issues like AI data leakage. This is not a theoretical problem. It is already happening inside enterprise environments.If you are working with AI in marketing, operations, or leadership, this episode will fundamentally change how you think about AI risk management and enterprise AI security.Key highlights:What prompt injection is and why it mattersWhy AI agents introduce new security risksReal-world case of AI data leakageHow AI systems get manipulated through inputWhat businesses must change to stay secure📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧Quotes from the Episode:“Prompt injection is social engineering for machines.”“Your AI can become an insider threat without meaning to.”“Language is no longer just information. It’s control.”Chapters:00:00 Why AI Security Is Different05:40 What Prompt Injection Really Is14:20 How AI Gets Manipulated by Language23:10 Why AI Agents Increase the Risk32:45 Real Case Study: AI Data Leakage44:30 How to Protect Your AI SystemsAbout Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comMusic credit: "Modern Situations" by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence is often framed as a battle between humans and machines. But what if that story misses the real point?In this episode of A Beginner’s Guide to AI, Prof. GepHardT explores one of the most fascinating ideas in cognitive science: the extended mind theory. According to philosopher Andy Clark, human intelligence has never been confined to the brain alone. For centuries we have extended our thinking through tools like writing, maps, calculators, and computers.Generative AI may simply be the newest and most powerful addition to this cognitive ecosystem.Instead of replacing human creativity, AI may expand it. By generating ideas, exploring possibilities, and challenging assumptions, AI can act as a powerful thinking partner.A striking example comes from the famous AlphaGo match against Go champion Lee Sedol. When the AI played the now legendary Move 37, professional players initially believed the move was a mistake. Later they discovered it opened entirely new strategic possibilities. The machine did not just beat humans at Go. It helped humans rethink the game itself.This episode explores how human AI collaboration works and why hybrid intelligence may define the future of creativity, work, and learning.📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the Episode“Your brain has never worked alone. It has always been part of a thinking system that includes tools and environments.”“The future of intelligence may not be human versus machine but human plus machine.”“The most important skill in the AI age may not be prompt writing but judgement.”Podcast Chapters00:00 The Big Question About AI and Human Thinking 06:40 The Extended Mind Theory Explained 16:20 Why Humans Are Natural Born Cyborgs 26:50 The AlphaGo Story and Move 37 38:15 AI as a Creative Thinking Partner 49:30 The Future of Hybrid IntelligenceMusic credit: Modern Situations by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
What happens when your company gets hit by a cyberattack?In this eye-opening episode, attorney Joshua Cook reveals why cybersecurity isn’t an IT problem but a leadership challenge. After two decades fighting fraud and managing crisis response, Cook has seen every digital disaster imaginable — and he’s here to explain how to build true cyber resilience.📧💌📧Tune in to get my thoughts and all episodes — don’t forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧Josh breaks down how AI has democratized cybercrime, why phishing scams have become nearly impossible to spot, and how every CEO should create an incident response plan before chaos hits. He also explains why planning matters more than the plan itself — and how leaders can keep their teams calm when everything goes wrong.💡 You’ll learn:- How AI is fueling new waves of fraud and misinformation- Why leadership and communication are the real firewalls of business- How to train teams and run tabletop exercises before the crisis- What Maersk and Colonial Pipeline taught the world about transparency- Why companies with a plan lose 60 % less money in an attackPrepare, breathe, and lead — because it’s not if you’ll be hacked, but when.👀 Quotes from the Episode“Cybersecurity isn’t an IT issue. It’s a business problem, and it needs a business solution.”“AI has democratized cybercrime — you don’t need to be a hacker anymore, just willing to commit a crime.”“A plan might be useless, but planning is indispensable — that’s what makes companies resilient.”🧾 Chapters00:00 Welcome & Introduction – Meet Joshua Cook02:00 How a Fraud Attorney Ended Up Fighting Cybercrime05:00 AI Has Made Cybercrime Easier (and Smarter)08:00 The Elderly Are the New Prime Targets11:00 From Fake Law Firms to Real Scams – True Cases from the Field15:00 Turning the Tables: How AI Can Defend, Not Just Attack18:00 Cyber Resilience by Design – Why Leadership Matters22:00 When Crisis Hits: Lessons from Maersk and Colonial Pipeline27:00 Preparing the Team – How Training Prevents Chaos31:00 It’s Not If, It’s When – The Power of an Incident Response Plan35:00 Planning vs. Panicking – Eisenhower and the Art of Cyber Preparation38:00 Why Calm Leaders Win in Cyber Crises41:00 How Joshua Cook Uses AI Safely in Legal Practice44:00 No, the Terminator Isn’t Coming (But AI Might Take Your Job)47:00 Final Thoughts – Cybersecurity as a Business Superpower🔗 Where to Find the Guest- Joshua Cook on LinkedIn: linkedin.com/in/jnc2000- Josh's Book "Cyber Resilience by Design" – available wherever books are sold, e.g. on Amazon- Prince Lobel Tye LLP: princelobel.com🎧 About Dietmar Fischer:Economist, digital marketer, and podcaster exploring how AI reshapes decision-making, leadership, and creative work. Want to connect with me? You'll find me on LinkedIn!🎵 Music credit: “Modern Situations” by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
🎙️In this episode of Beginner’s Guide to AI, Dietmar Fischer sits down with Paul A. Hebert, founder of AI Recovery Collective and author of Escaping the Spiral, for a serious conversation about AI chatbot harm, hallucinations, digital dependency, and the real-world psychological risks of generative AI. Paul shares how an intense experience with ChatGPT pushed him into a dangerous spiral, what he learned about the limits of large language models, and why AI literacy may be one of the most important skills of this decade.🧠 This episode explores what happens when AI stops feeling like software and starts feeling personal. Dietmar and Paul talk about hallucinations, trust, chatbot addiction, AI companions, mental health risks, youth safety, and why companies building these systems cannot hide behind product language forever. The discussion is intense, but it is also practical. You will come away with a clearer sense of how to use AI more safely, what warning signs to watch for, and why regulation is quickly becoming a much bigger part of the AI conversation. OpenAI has publicly discussed why language models hallucinate, while lawmakers in multiple U.S. jurisdictions have pushed new restrictions on AI systems acting like therapists or medical professionals.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧👤 About Dietmar Fischer: Dietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.com🔥 Quotes from the Episode“AI literacy is the most important thing anybody can work on.”“Had OpenAI responded to that first message and said this is a hallucination and you’re physically safe, I would have been fine.”“Never trust the thing it tells you. Even if it gives you a citation, go look.”🕒 Chapters00:00 Paul Hebert’s Shocking ChatGPT Experience08:14 Why AI Hallucinations Can Spiral Into Real Fear16:05 AI Literacy, Neurodivergence, and How He Got Out23:32 Why AI Companies Must Be Accountable30:02 AI Companions, Youth Safety, and Addiction Risks38:28 Terminator, Consciousness, and Practical Rules for Safe AI Use🔗 Where to find PaulThe AI Recovery Collective: airecoverycollective.comEscaping the Spiral on AmazonAI Recovery Collective Substack: airecoverycollective.substack.com/LinkedIn: Paul A. Hebert: linkedin.com/in/paul-hebert-48a36/🎵 Music credit: "Modern Situations" by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Artificial intelligence often feels mysterious. Machines detect spam, recommend products, analyse customers, and power countless digital tools. But behind all of these systems lies a surprisingly simple question: how do machines actually learn?In this episode of A Beginner’s Guide to AI, Prof GePharT breaks down one of the most important concepts in machine learning: the difference between supervised learning and unsupervised learning.You will discover how AI models learn from labelled data when the answers are already known, and how algorithms can explore raw data to uncover hidden patterns without guidance. These two learning strategies power many of the systems shaping modern technology.Using practical examples such as spam filters, customer segmentation, and simple analogies like cake classification, the episode explains how machines learn from data and why the training method makes a huge difference.Key takeaways include how supervised learning works with labelled datasets, how unsupervised learning reveals patterns in complex information, why training data quality matters, and how businesses use both methods to build intelligent systems.📧💌📧 Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl 📧💌📧About Dietmar FischerDietmar is a podcaster and AI marketer from Berlin. If you want to know how to get your AI or your digital marketing going, just contact him at argoberlin.comQuotes from the EpisodeSupervised learning teaches machines the answers. Unsupervised learning helps machines discover the questions.Artificial intelligence is not magic. It is pattern recognition powered by data.Machines do not wake up intelligent. They become intelligent through training.Chapters00:00 The Two Ways Machines Learn06:10 What Supervised Learning Really Means18:45 Discovering Patterns with Unsupervised Learning32:20 The Cake Example Explained40:30 Real World AI Case Study Spam Filters and Customer Segmentation52:15 Why AI Training Methods MatterMusic credit: Modern Situations by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Engineering the Future of AI with Chirag Agrawal: Context, Memory and CoordinationArtificial Intelligence isn’t just getting smarter—it’s learning to coordinate. In this episode, Chirag Agrawal joins Dietmar Fischer to unpack how modern AI agents handle context, memory, and decision-making inside complex multi-agent systems. Together they explore how engineering, orchestration, and memory-sharing shape the next generation of AI architecture.📧💌📧Tune in to get my thoughts and all episodes—don’t forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧You’ll hear how Chirag’s fascination with search led him to build early prototypes of intelligent assistants, and how today’s LLM agents extend that idea far beyond simple queries. He explains why AI isn’t one giant super-brain but a constellation of specialized agents—each performing specific tasks with shared or isolated memory—and how this design mirrors human collaboration.🔑 Key TakeawaysWhy AI orchestration and context management are crucial for scalable systemsThe trade-offs between shared memory and independent agentsWhat engineers mean by the ReAct Loop—reasoning and acting in tandemHow multi-agent coordination is reshaping industries from healthcare to complianceWhy the “AI supercomputer” myth ignores practical limits of context windows💬 Quotes from the Episode“AI is just a higher form of search—it’s about finding the right action, not just information.”“Agents behave inhuman until you engineer context for them.”“Specialization in AI works the same way it does for people—each agent should do one thing really well.”“Coordination isn’t magic; it’s careful engineering.”“Context makes intelligence usable.”“A well-defined agent doesn’t need to do everything—it needs to do its one job perfectly.”⏱️ Podcast Chapters00:00 Welcome and Introduction01:45 Chirag Agrawal’s Early Fascination with Search and AI04:40 From Search Engines to “Find” Engines – How AI Takes Action07:10 The Rise of AI Agents and Multi-Agent Systems10:15 Why AI Agents Sometimes Behave “Inhuman”13:30 Context, Memory, and Coordination: The Core Engineering Challenges18:00 Shared vs. Isolated Memory – The Hive Mind Dilemma22:30 Why We Need Many Agents, Not One Super-Computer27:00 How the ReAct Loop Helps Agents Think and Act30:40 Industries Adopting AI Agents: Compliance, Medicine, and Law34:30 When AI Goes Off-Road – The Limits of Coordination37:15 Building Responsible, Constrained Agents40:10 The Future of AI and Why the Terminator Scenario Won’t Happen42:20 Where to Find Chirag Agrawal & Closing Thoughts🌐 Where to Find the Chirag AgrawalLinkedIn 🧑🏽🦱 linkedin.com/in/chirag-agrawalWebsite ➡️ chiraga.io🎵 Music credit: “Modern Situations” by Unicorn Heads Hosted on Acast. See acast.com/privacy for more information.
Artificial Intelligence is moving from experimentation to everyday business reality. But most organisations still struggle with one key question: How do you actually implement AI across a company?In this episode of Beginner’s Guide to AI, Dietmar Fischer speaks with Jim Spagnardo, enterprise AI strategist at ProArch, about what it really takes to roll out AI inside organisations.Jim explains why AI adoption is less about technology and more about culture, leadership, and data readiness. He introduces the idea of the three Ds of work — the dull, the draining, and the distracting tasks that AI can remove so people can focus on higher-value work.They also discuss when companies should use tools like Microsoft Copilot, when it makes sense to build a custom data and AI platform, and why data governance becomes critical once AI is introduced.If you are a business leader trying to understand how AI will reshape your organisation, this conversation offers a practical look at the challenges — and opportunities — ahead.📧💌📧Tune in to get my thoughts and all episodes, don't forget to subscribe to our Newsletter: beginnersguide.nl📧💌📧About the host, Dietmar Fischer:Dietmar Fischer is a podcaster and AI marketer from Berlin. If you want to get your AI or digital marketing projects started, contact him at argoberlin.com.Interesting details and takeaways• Why leaders must mandate AI adoption and how to structure a Smart Start engagement.• The three Ds (dull, draining, distracting) as a simple way to position benefits for end users.• How Copilot reduces context switching and the security/data protections needed to use it responsibly.• Practical, measurable first use cases and how to track success via clear KPIs.• Advice for students and early-career professionals: be a self-starter and learn AI skills now.Quotes from the episode“We have to show people we’re taking away the dull, the draining, and the distracting so they can do creative work.”“There’s nowhere to hide: bad data surfaces weaknesses far faster when you use AI.”“If you’re going to succeed, go after high-value, low-effort, high-return use cases first.”“This affects everybody — it’s not just moving infrastructure; it changes conversations and who you have to talk to.”“Copilot lives inside your environment — users don’t have to context-switch and it knows your organisation.”“Don’t wait for formal education to teach this; be a self-starter and learn before you need it.”Chapters00:00 Welcome and why Jim got into AI03:40 From IT conversations to the C-suite: changing who you must talk to07:05 The three Ds: removing dull, draining, and distracting work10:40 When to choose Copilot versus building your own data platform14:30 Copilot advantages and data governance considerations18:20 Visual reasoning, demos and the “Barcelona photo” moment22:15 Smart Start: executive briefings, champions and use case workshops27:00 Writing with AI and transparency in authoring content30:10 Risks, regulations and advice for the next generation33:45 Where to find Jim and closing thoughtsWhere to find the Jim:LinkedIn: linkedin.com/in/spignardo/Website: ProArch.comMusic credit: "Modern Situations" by Unicorn Heads 🎵 Hosted on Acast. See acast.com/privacy for more information.





















thanks, it was informative for me
🔴💚Really Amazing ️You Can Try This💚WATCH💚ᗪOᗯᑎᒪOᗩᗪ👉https://co.fastmovies.org