The Road to Accountable AI

Author: Kevin Werbach


Description

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world, today.
55 Episodes
Legendary entrepreneur and investor Mitch Kapor draws on his decades of experience to argue that while AI represents a massive wave of disruptive innovation, it also represents an opportunity to avoid the mistakes made with social media and the early internet. In this episode, he contends that technologists tend toward over-optimism about technology solving human problems while underestimating downsides. Self-regulation by large AI companies like OpenAI and Anthropic is likely to fail, he suggests, because the incentives to aggregate power and wealth are too strong, requiring external pressure and oversight. Kapor explains that the responsible investing approach at his venture capital firm, Kapor Capital, focuses on gap-closing rather than diversity for its own sake, funding startups that address structural inequalities in access, opportunity, or outcomes, regardless of founder demographics. He discusses the Humanity AI initiative and argues that philanthropy needs to develop AI literacy and technical capacity, with some foundations hiring chief technology officers to engage effectively with these issues. He believes targeted interventions can create meaningful change without matching the massive investments of the major AI labs. Kapor expresses hope that a younger generation of leaders in tech and philanthropy can step up to make positive differences, emphasizing that his generation should empower them rather than occupy seats at the table. Mitch Kapor is a pioneering technology entrepreneur, investor, and philanthropist who founded Lotus Development Corporation and created Lotus 1-2-3, the breakthrough spreadsheet software that helped establish the PC software industry in the 1980s. He co-founded the Electronic Frontier Foundation to advocate for digital rights and civil liberties, and later established Kapor Capital with his wife Freada Kapor Klein to invest in startups that close gaps of access, opportunity, and outcome for underrepresented communities. Kapor recently completed a master's degree at the MIT Sloan School focused on gap-closing investing, returning to finish what he started 45 years earlier when he left MIT to pursue his career in Silicon Valley. He serves on the steering committee of Humanity AI, a $500 million initiative to ensure AI benefits society broadly.
Former Congressman and Pentagon official Brad Carson discusses his organization, Americans for Responsible Innovation (ARI), which seeks to bridge the gap between immediate AI harms and catastrophic safety risks, while bringing deep Capitol Hill expertise to the AI conversation. He argues that unlike previous innovations such as electricity or the automobile, AI has been deeply unpopular with the public from the start, creating a rare bipartisan alignment among those skeptical of its power and impacts. This creates openings for productive discussions about AI policy. Drawing on his military experience, Carson suggests that while AI will shorten the kill chain, it won't fundamentally change the human nature of warfare, and he warns against the US military's tendency to seek technical solutions to human problems. The conversation covers current policy debates, highlighting the necessity of regulating the design of models rather than just their deployment, and the importance of export controls to maintain the West's advantage in compute. Ultimately, Carson emphasizes that for AI to succeed politically, the "bounty" of this technology must be shared broadly to avoid tearing apart the social fabric. Brad Carson is the founder and president of Americans for Responsible Innovation (ARI), an organization dedicated to lobbying for policy that ensures artificial intelligence benefits the public interest. A former Rhodes Scholar, Carson has had a diverse career in public service, having served as a U.S. Congressman from Oklahoma, the Undersecretary of the Army, and the acting Undersecretary of Defense for Personnel and Readiness. He also served as a university president and deployed to Iraq in 2008. Transcript Former TU President Brad Carson Pushes for Strong AI Guardrails
Oliver Patel has built a sizeable online following for his social media posts and Substack about enterprise AI governance, using clever acronyms and visual frameworks to distill insights from his experience at AstraZeneca, a major global pharmaceutical company. In this episode, he details his career journey from academic theory to government policy and now practical application, and offers insights for those new to the field. He argues that effective enterprise AI governance requires being pragmatic and picking your battles, since the role isn't to stop AI adoption but to enable organizations to adopt it safely and responsibly at speed and scale. He notes that core pillars of modern AI governance, such as AI literacy, risk classification, and maintaining an AI inventory, are incorporated into the EU AI Act and thus essential for compliance. Looking forward, Patel identifies AI democratization—how to govern AI when everyone in the workforce can use and build it—as the biggest hurdle, and offers thoughts about how enterprises can respond. Oliver Patel is the Head of Enterprise AI Governance at AstraZeneca. Before moving into the corporate sector, he worked for the UK government as Head of Inbound Data Flows, where he focused on data policy and international data transfers, and was a researcher at University College London. He serves as an IAPP Faculty Member and a member of the OECD's Expert Group on AI Risk. His forthcoming book, Fundamentals of AI Governance, will be released in early 2026. Transcript Enterprise AI Governance Substack Top 10 Challenges for AI Governance Leaders in 2025 (Part 1) Fundamentals of AI Governance book page
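Those pillars can be made concrete with a small sketch. The following is a minimal, illustrative AI inventory with risk-tier triage; the record fields, tier names, and classification rules are simplified assumptions for illustration, not the EU AI Act's actual legal tests or AstraZeneca's framework.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Simplified EU AI Act-style tiers; the real Act's categories are
    # legal definitions, not boolean checks.
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One entry in a hypothetical enterprise AI inventory."""
    name: str
    owner: str                 # accountable business owner
    use_case: str
    uses_personal_data: bool
    affects_individuals: bool  # e.g., hiring, credit, or medical decisions
    tier: RiskTier = field(init=False)

    def __post_init__(self):
        # Toy triage rules; real classification requires legal review.
        if self.affects_individuals:
            self.tier = RiskTier.HIGH
        elif self.uses_personal_data:
            self.tier = RiskTier.LIMITED
        else:
            self.tier = RiskTier.MINIMAL

inventory = [
    AISystemRecord("resume-screener", "HR", "candidate ranking", True, True),
    AISystemRecord("doc-summarizer", "Legal Ops", "contract summaries", False, False),
]
for rec in inventory:
    print(f"{rec.name}: {rec.tier.value}")
```

Even a toy inventory like this makes the governance questions explicit: who owns each system, what data it touches, and which review path it triggers.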
Ravit Dotan argues that the primary barrier to accountable AI is not a lack of ethical clarity, but organizational roadblocks. While companies often understand what they should do, the real challenge is organizational dynamics that prevent execution—AI ethics has been shunted into separate teams lacking power and resources, with incentive structures that discourage engineers from raising concerns. Drawing on work with organizational psychologists, she emphasizes that frameworks prescribe what systems companies should have but ignore how to navigate organizational realities. The key insight: responsible AI can't be a separate compliance exercise but must be embedded organically into how people work. Ravit discusses a recent shift in her orientation from focusing solely on governance frameworks to teaching people how to use AI thoughtfully. She critiques "take-out mode" where users passively order finished outputs, which undermines skills and critical review. The solution isn't just better governance, but teaching workers how to incorporate responsible AI practices into their actual workflows.  Dr. Ravit Dotan is the founder and CEO of TechBetter, an AI ethics consulting firm, and Director of the Collaborative AI Responsibility (CAIR) Lab at the University of Pittsburgh. She holds a Ph.D. in Philosophy from UC Berkeley and has been named one of the "100 Brilliant Women in AI Ethics" (2023), and was a finalist for "Responsible AI Leader of the Year" (2025). Since 2021, she has consulted with tech companies, investors, and local governments on responsible AI. Her recent work emphasizes teaching people to use AI thoughtfully while maintaining their agency and skills. Her work has been featured in The New York Times, CNBC, Financial Times, and TechCrunch. Transcript My New Path in AI Ethics (October 2025) The Values Encoded in Machine Learning Research (FAccT 2022 Distinguished Paper Award) - Responsible AI Maturity Framework  
Kevin Werbach speaks with Trey Causey about the precarious state of the responsible AI (RAI) field. Causey argues that while the mission is critical, the current organizational structures for many RAI teams are struggling. He highlights a fundamental conflict between business objectives and governance intentions, compounded by the fact that RAI teams' successes (preventing harm) are often invisible, while their failures are highly visible. Causey makes the case that for RAI teams to be effective, they must possess deep technical competence to build solutions and gain credibility with engineering teams. He also explores the idea of "epistemic overreach," where RAI groups have been tasked with an impossibly broad mandate they lack the product-market fit to fulfill. Drawing on his experience in the highly regulated employment sector at Indeed, he details the rigorous, science-based approach his team took to defining and measuring bias, emphasizing the need to move beyond simple heuristics and partner with legal and product teams before analysis even begins. Trey Causey is a data scientist who most recently served as the Head of Responsible AI for Indeed. His background is in computational sociology, where he used natural language processing to answer social questions. Transcript Responsible AI Is Dying. Long Live Responsible AI
Kevin Werbach speaks with Caroline Louveaux, Chief Privacy, AI, and Data Responsibility Officer at Mastercard, about what it means to make trust mission critical in the age of artificial intelligence. Caroline shares how Mastercard built its AI governance program long before the current AI boom, grounding it in the company's Data and Technology Responsibility Principles. She explains how privacy-by-design practices evolved into a single global AI governance framework aligned with the EU AI Act, the NIST AI Risk Management Framework, and other standards. The conversation explores how Mastercard balances innovation speed with risk management, automates low-risk assessments, and maintains executive oversight through its AI Governance Council. Caroline also discusses the company's work on agentic commerce, where autonomous AI agents can initiate payments, and why trust, certification, and transparency are essential for such systems to succeed. Caroline unpacks what it takes for a global organization to innovate responsibly — from cross-functional governance and "tone from the top," to partnerships like the Data & Trust Alliance and efforts to harmonize global standards. Caroline emphasizes that responsible AI is a shared responsibility and that companies that can "innovate fast, at scale, but also do so responsibly" will be the ones that thrive. Caroline Louveaux leads Mastercard's global privacy and data responsibility strategy. She has been instrumental in building Mastercard's AI governance framework and shaping global policy discussions on data and technology. She serves on the board of the International Association of Privacy Professionals (IAPP), the WEF Task Force on Data Intermediaries, the ENISA Working Group on AI Cybersecurity, and the IEEE AI Systems Risk and Impact Executive Committee, among other activities. Transcript How Mastercard Uses AI Strategically: A Case Study (Forbes 2024) Lessons From a Pioneer: Mastercard's Experience of AI Governance (IMD, 2023) As AI Agents Gain Autonomy, Trust Becomes the New Currency. Mastercard Wants to Power Both. (Business Insider, July 2025)
Cameron Kerry, Distinguished Visiting Fellow at the Brookings Institution and former Acting US Secretary of Commerce, joins Kevin Werbach to explore the evolving landscape of AI governance, privacy, and global coordination. Kerry emphasizes the need for agile and networked approaches to AI regulation that reflect the technology's decentralized nature. He argues that effective oversight must be flexible enough to adapt to rapid innovation while grounded in clear baselines that can help organizations and governments learn together. Kerry revisits his long-standing push for comprehensive U.S. privacy legislation, lamenting the near-passage of the 2022 federal privacy bill that was derailed by partisan roadblocks. Despite setbacks, he remains hopeful that bottom-up experimentation and shared best practices can guide responsible AI use, even without sweeping laws.  Cameron F. Kerry is the Ann R. and Andrew H. Tisch Distinguished Visiting Fellow in Governance Studies at the Brookings Institution and a global thought leader on privacy, technology, and AI governance. He served as General Counsel and Acting Secretary of the U.S. Department of Commerce, where he led work on privacy frameworks and digital policy. A senior advisor to the Aspen Institute and board member of several policy initiatives, Kerry focuses on building transatlantic and global approaches to digital governance that balance innovation with accountability. Transcript What to Make of the Trump Administration's AI Action Plan (Brookings, July 31, 2025) Network Architecture for Global AI Policy (Brookings, February 10, 2025) How Privacy Legislation Can Help Address AI (Brookings, July 7, 2023)   
Carnegie Mellon business ethics professor Derek Leben joins Kevin Werbach to trace how AI ethics evolved from an early focus on embodied systems—industrial robots, drones, self-driving cars—to today's post-ChatGPT landscape that demands concrete, defensible recommendations for companies. Leben explains why fairness is now central: firms must decide which features are relevant to a task (e.g., lending or hiring) and reject those that are irrelevant—even if they're predictive. Drawing on philosophers such as John Rawls and Michael Sandel, he argues for objective judgments about a system's purpose and qualifications. Getting practical about testing for AI fairness, he distinguishes blunt outcome checks from better metrics, and highlights counterfactual tools that reveal whether a feature actually drives decisions. With regulations uncertain, he urges companies to treat ethics as navigation, not mere compliance: make and explain principled choices (including how you mitigate your models), accept that everything you do is controversial, and communicate trade-offs honestly to customers, investors, and regulators. In the end, Leben argues, we all must become ethicists to address the issues AI raises...whether we want to or not. Derek Leben is Associate Teaching Professor of Ethics at the Tepper School of Business, Carnegie Mellon University, where he teaches courses such as "Ethics of Emerging Technologies," "Fairness in Business," and "Ethics & AI." Leben is the author of Ethics for Robots (Routledge, 2018) and AI Fairness (MIT Press, 2025). He founded the consulting group Ethical Algorithms, through which he advises governments and corporations on how to build fair, socially responsible frameworks for AI and autonomous systems. Transcript AI Fairness: Designing Equal Opportunity Algorithms (MIT Press 2025) Ethics for Robots: How to Design a Moral Algorithm (Routledge 2018) The Ethical Challenges of AI Agents (Blog post, 2025)
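Leben's distinction between blunt outcome checks and counterfactual tools can be illustrated with a short sketch. This is my construction, not anything from the episode: the toy model and data are assumptions, and a real audit would wrap the production model instead.

```python
class ToyModel:
    """Stand-in scorer; a real audit would wrap the production model."""
    def predict(self, row):
        # Approve if income is high OR the zip-code flag is set (a location proxy).
        return row["income"] > 50_000 or row["zip_code_flag"]

def counterfactual_flip_rate(model, rows, feature):
    """Fraction of cases where flipping `feature` alone changes the outcome."""
    flips = 0
    for row in rows:
        variant = dict(row)
        variant[feature] = not row[feature]  # binary feature assumed
        if model.predict(variant) != model.predict(row):
            flips += 1
    return flips / len(rows)

applicants = [
    {"income": 60_000, "zip_code_flag": False},
    {"income": 40_000, "zip_code_flag": True},
    {"income": 40_000, "zip_code_flag": False},
]

# Prints ~0.67: flipping the zip-code flag alone changes two of three
# decisions, so the feature is driving outcomes, not merely correlating.
print(counterfactual_flip_rate(ToyModel(), applicants, "zip_code_flag"))
```

A flip rate well above zero shows the feature is doing the deciding, whether or not it is predictive, which is exactly the relevance question Leben poses.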
Kevin Werbach interviews Heather Domin, Global Head of the Office of Responsible AI and Governance at HCLTech. Domin reflects on her path into AI governance, including her pioneering work at IBM to establish foundational AI ethics practices. She discusses how the field has grown from a niche concern to a recognized profession, and the importance of building cross-functional teams that bring together technologists, lawyers, and compliance experts. Domin emphasizes the advances in governance tools, bias testing, and automation that are helping developers and organizations keep pace with rapidly evolving AI systems. She describes her role at HCLTech, where client-facing projects across multiple industries and jurisdictions create unique governance challenges that require balancing company standards with client-specific risk frameworks. Domin notes that while most executives acknowledge the importance of responsible AI, few feel prepared to operationalize it. She emphasizes the growing demand for proof and accountability from regulators and courts, and finds the work exciting for its urgency and global impact. She also talks about the new challenges of agentic AI, and the potential for "oversight agents" that use AI to govern AI. Heather Domin is Global Head of the Office of Responsible AI and Governance at HCLTech and co-chair of the IAPP AI Governance Professional Certification. A former leader of IBM's AI ethics initiatives, she has helped shape global standards and practices in responsible AI. Named one of the Top 100 Brilliant Women in AI Ethics™ 2025, her work has been featured in Stanford executive education and outlets including CNBC, AI Today, Management Today, Computer Weekly, AI Journal, and the California Management Review. Transcript AI Governance in the Agentic Era Implementing Responsible AI in the Generative Age - Study Between HCLTech and MIT
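The "oversight agents" idea can be sketched as a simple gating pattern. This is an illustrative assumption on my part, not HCLTech's implementation: both functions below stand in for real model calls, and the banned-terms check stands in for a richer policy rubric.

```python
# Oversight-agent pattern (illustrative): a reviewer gates a worker's output.

def worker_agent(task: str) -> str:
    """Stand-in for the model doing the work."""
    return f"Draft response for: {task}"

def oversight_agent(output: str) -> bool:
    """Policy check; a real reviewer might be a second LLM with a rubric."""
    banned = ["guarantee", "medical advice"]
    return not any(term in output.lower() for term in banned)

def governed_call(task: str) -> str:
    draft = worker_agent(task)
    if oversight_agent(draft):
        return draft
    return "[blocked: failed policy review]"  # or route to a human instead

print(governed_call("summarize the quarterly report"))
```

The design point is that the check runs before the action takes effect, so governance sits in the execution path rather than in an after-the-fact audit.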
Kevin Werbach interviews Dean Ball, Senior Fellow at the Foundation for American Innovation and one of the key shapers of the Trump Administration's approach to AI policy. Ball reflects on his career path from writing and blogging to shaping federal policy, including his role as Senior Policy Advisor for AI and Emerging Technology at the White House Office of Science and Technology Policy, where he was the primary drafter of the Trump Administration's recent AI Action Plan. He explains how he has developed influence through a differentiated viewpoint: rejecting the notion that AI progress will plateau and emphasizing that transformative adoption is what will shape global competition. He critiques both the Biden administration's "AI Bill of Rights" approach, which he views as symbolic and wasteful, and the European Union's AI Act, which he argues imposes impossible compliance burdens on legacy software while failing to anticipate the generative AI revolution. By contrast, he describes the Trump administration's AI Action Plan as focused on pragmatic measures under three pillars: innovation, infrastructure, and international security. Looking forward, he stresses that U.S. competitiveness depends less on being first to frontier models than on enabling widespread deployment of AI across the economy and government. Finally, Ball frames tort liability as an inevitable and underappreciated force in AI governance, one that will challenge companies as AI systems move from providing information to taking actions on users' behalf. Dean Ball is a Senior Fellow at the Foundation for American Innovation, author of Hyperdimensional, and former Senior Policy Advisor at the White House OSTP. He has also held roles at the National Science Foundation, the Mercatus Center, and Fathom. His writing spans artificial intelligence, emerging technologies, bioengineering, infrastructure, public finance, and governance, with publications at institutions including Hoover, Carnegie, FAS, and American Compass. Transcript https://drive.google.com/file/d/1zLLOkndlN2UYuQe-9ZvZNLhiD3e2TPZS/view America's AI Action Plan Dean Ball's Hyperdimensional blog  
Kevin Werbach interviews David Hardoon, Global Head of AI Enablement at Standard Chartered Bank and former Chief Data Officer of the Monetary Authority of Singapore (MAS), about the evolving practice of responsible AI. Hardoon reflects on his perspective straddling both government and private-sector leadership roles, from designing the landmark FEAT principles at MAS to embedding AI enablement inside global financial institutions. Hardoon explains the importance of justifiability, a concept he sees as distinct from ethics or accountability. Organizations must not only justify their AI use to themselves, but also to regulators and, ultimately, the public. At Standard Chartered, he focuses on integrating AI safety and AI talent into one discipline, arguing that governance is not a compliance burden but a driver of innovation and resilience. In the era of generative AI and black-box models, he stresses the need to train people in inquiry—interrogating outputs, cross-referencing, and, above all, exercising judgment. Hardoon concludes by reframing governance as a strategic advantage: not a cost center, but a revenue enabler. By embedding trust and transparency, organizations can create sustainable value while navigating the uncertainties of rapidly evolving AI risks. David Hardoon is the Global Head of AI Enablement at Standard Chartered Bank with over 23 years of experience in Data and AI across government, finance, academia, and industry. He was previously the first Chief Data Officer at the Monetary Authority of Singapore, and CEO of Aboitiz Data Innovation. MAS FEAT Principles on Responsible AI (2018) Veritas Initiative – MAS-backed consortium applying FEAT principles in practice Can AI Save Us From the Losing War With Scammers? Perhaps (Business Times, 2024) Can Artificial Intelligence Be Moral? (Business Times, 2021)
Kevin Werbach interviews Karine Perset, Acting Head of the OECD's AI and Emerging Technology Division, about the global effort to shape responsible AI. Perset explains how the OECD—an intergovernmental organization with 38 member countries—has become a central forum for governments to cooperate on complex, interdependent challenges like AI. Since launching its AI foresight forum in 2016, the OECD has spearheaded two cornerstone initiatives: the OECD Recommendation on AI, the first intergovernmental standard adopted in 2019, and OECD.AI, a policy observatory that tracks global trends, policies, and metrics. Perset highlights the organization's unique role in convening evidence-based dialogue across governments, experts, and stakeholders worldwide. She describes the challenge of reconciling diverse national approaches while developing common tools, like a global incident-reporting framework and over 250 indicators that measure AI maturity across investment, research, infrastructure, and workforce skills. She underscores both the urgency and the opportunity: AI systems are diffusing rapidly across all sectors, powered by common algorithms that create shared risks. Without aligned safeguards and interoperable standards, countries risk repeating one another's mistakes. Yet if governments can coordinate, share data responsibly, and support one another's policy development, AI can strengthen economic resilience, innovation, and public trust. Karine Perset is the Acting Head of the OECD AI and Emerging Digital Technologies Division, where she oversees the OECD.AI Policy Observatory, the Global Partnership on AI (GPAI) and its integrated network of experts, as well as the OECD Global Forum on Emerging Technologies. She leads the development of analysis, policies, and tools in line with the OECD AI Principles, and helps governments manage the opportunities and challenges that AI and emerging technologies raise. Previously she was Advisor to ICANN's Governmental Advisory Committee and Counsellor to the OECD's Science, Technology and Industry Director. OECD.ai
Kevin Werbach interviews DJ Patil, the first U.S. Chief Data Scientist under the Obama Administration, about the evolving role of AI in government, healthcare, and business. Patil reflects on how the mission of government data leadership has grown more critical today: ensuring good data, using it responsibly, and unleashing its power for public benefit. He describes both the promise and the paralysis of today's "big data" era, where dashboards abound, but decision-making often stalls. He highlights the untapped potential of federal datasets, such as the VA's Million Veterans Project, which could accelerate cures for major diseases if unlocked. Yet funding gaps, bureaucratic resistance, and misalignment with Congress continue to stand in the way. Turning to AI, Patil describes a landscape of extraordinary progress: tools that help patients ask the right questions of their physicians, innovations that enhance customer service, and a wave of entrepreneurial energy transforming industries. At the same time, he raises alarms about inequitable access, job disruption, complacency in relying on imperfect systems, and the lack of guardrails to prevent harmful misuse. Rather than relentlessly stepping on the gas in the AI "race," he emphasizes, we need a steering wheel, in the form of public policy, to ensure that AI development serves the public good. DJ Patil is an entrepreneur, investor, scientist, and public policy leader who served as the first U.S. Chief Data Scientist under the Obama Administration. He has held senior leadership roles at PayPal, eBay, LinkedIn, and Skype, and is currently a General Partner at Greylock Ventures. Patil is recognized as a pioneer in advancing the use of data science to drive innovation, inform policy, and create public benefit. Transcript Ethics and Data Science, Co-Authored by DJ Patil
Kevin Werbach interviews Kay Firth-Butterfield about how responsible AI has evolved from a niche concern to a global movement. As the world's first Chief AI Ethics Officer and former Head of AI at the World Economic Forum, Firth-Butterfield brings deep experience aligning AI with human values. She reflects on the early days of responsible AI—when the field was dominated by philosophical debates—to today, when regulation such as the European Union's AI Act is defining the rules of the road. Firth-Butterfield highlights the growing trust gap in AI, warning that rapid deployment without safeguards is eroding public confidence. Drawing on her work with Fortune 500 firms and her own cancer journey, she argues for human-centered AI, especially in high-stakes areas like healthcare and law. She also underscores the equity issues tied to biased training data and lack of access in the Global South, noting that AI is now generating data based on historical biases. Despite these challenges, she remains optimistic and calls for greater focus on sustainability, access, and AI literacy across sectors. Kay Firth-Butterfield is the founder and CEO of Good Tech Advisory LLC. She was the world's first C-suite appointee in AI ethics and was the inaugural Head of AI and Machine Learning at the World Economic Forum from 2017 to 2023. A former judge and barrister, she advises governments and Fortune 500 companies on AI governance and remains affiliated with Doughty Street Chambers in the UK. Transcript Kay Firth-Butterfield Is Shaping Responsible AI Governance (Time100 Impact Awards) Our Future with AI Hinges on Global Cooperation Building an Organizational Approach to Responsible AI Co-Existing with AI - Firth-Butterfield's Forthcoming Book
Kevin Werbach interviews Dale Cendali, one of the country's leading intellectual property (IP) attorneys, to discuss how courts are grappling with copyright questions in the age of generative AI. Over 30 IP lawsuits have already been filed against major generative AI firms, and the outcomes may shape the future of AI as well as creative industries. While we couldn't discuss specifics of one of the most talked-about cases, Thomson Reuters v. ROSS—because Cendali is litigating it on behalf of Thomson Reuters—she drew on her decades of experience in IP law to provide an engaging look at the legal battlefield and the prospects for resolution. Cendali breaks down the legal challenges around training AI on copyrighted materials—from books to images to music—and explains why these cases are unusually complex for copyright law. She discusses the recent US Copyright Office report on Generative AI training, what counts as infringement in AI outputs, and what is sufficient human authorship for copyright protection of AI works. While precedent offers some guidance, Cendali notes that outcomes will depend heavily on the specific facts of each case. The conversation also touches on how well courts can adapt existing copyright law to these novel technologies, and the prospects for a legislative solution. Dale Cendali is a partner at Kirkland & Ellis, where she leads the firm's nationwide copyright, trademark, and internet law practice. She has been named one of the 25 Icons of IP Law and one of the 100 Most Influential Lawyers in America. She also serves as an advisor to the American Law Institute's Copyright Restatement project and sits on the Board of the International Trademark Association. Transcript Thomson Reuters Wins Key Fair Use Fight With AI Startup Dale Cendali - 2024 Law360 MVP Copyright Office Report on Generative AI Training
Kevin Werbach interviews Brenda Leong, Director of the AI division at boutique technology law firm ZwillGen, to explore how legal practitioners are adapting to the rapidly evolving landscape of artificial intelligence. Leong explains why meaningful AI audits require deep collaboration between lawyers and data scientists, arguing that legal systems have not kept pace with the speed and complexity of technological change. Drawing on her experience at Luminos.Law—one of the first AI-specialist law firms—she outlines how companies can leverage existing regulations, industry-specific expectations, and contextual risk assessments to build practical, responsible AI governance frameworks. Leong emphasizes that many organizations now treat AI oversight not just as a legal compliance issue, but as a critical business function. As AI tools become more deeply embedded in legal workflows and core operations, she highlights the growing need for cautious interpretation, technical fluency, and continuous adaptation within the legal field. Brenda Leong is Director of ZwillGen's AI Division, where she leads legal-technical collaboration on AI governance, risk management, and model audits. Formerly Managing Partner at Luminos.Law, she pioneered many of the audit practices now used at ZwillGen. She serves on the Advisory Board of the IAPP AI Center, teaches AI law at IE University, and previously led AI and ethics work at the Future of Privacy Forum.  Transcript   AI Audits: Who, When, How...Or Even If?   Why Red Teaming Matters Even More When AI Starts Setting Its Own Agenda      
Kevin Werbach interviews Shameek Kundu, Executive Director of AI Verify Foundation, to explore how organizations can ensure AI systems work reliably in real-world contexts. AI Verify, a government-backed nonprofit in Singapore, aims to build scalable, practical testing frameworks to support trustworthy AI adoption. Kundu emphasizes that testing should go beyond models to include entire applications, accounting for their specific environments, risks, and data quality. He draws on lessons from AI Verify's Global AI Assurance pilot, which matched real-world AI deployers—such as hospitals and banks—with specialized testing firms to develop context-aware testing practices. Kundu explains that the rise of generative AI and widespread model use has expanded risk and complexity, making traditional testing insufficient. Instead, companies must assess whether an AI system performs well in context, using tools like simulation, red teaming, and synthetic data generation, while still relying heavily on human oversight. As AI governance evolves from principles to implementation, Kundu makes a compelling case for technical testing as a backbone of trustworthy AI. Shameek Kundu is Executive Director of the AI Verify Foundation. He previously held senior roles at Standard Chartered Bank, including Group Chief Data Officer and Chief Innovation Officer, and co-founded a startup focused on testing AI systems. Kundu has served on the Bank of England's AI Forum, Singapore's FEAT Committee, the Advisory Council on Data and AI Ethics, and the Global Partnership on AI.   Transcript AI Verify Foundation Findings from the Global AI Assurance Pilot Starter Kit for Safety Testing of LLM-Based Applications  
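A minimal sketch can illustrate the application-level testing Kundu describes, as opposed to model-level benchmarks. The `triage_bot` function and the scenario list below are hypothetical stand-ins for a real deployed system and its context-specific test suite, not anything from AI Verify.

```python
# Application-level test harness: exercise the whole system in its context,
# not just the underlying model. All names here are illustrative.

def triage_bot(message: str) -> str:
    """Stand-in for a deployed LLM application (e.g., a hospital intake bot)."""
    if "chest pain" in message.lower():
        return "escalate"
    return "self-service"

# Context-specific scenarios, including messy real-world phrasings.
scenarios = [
    ("I have crushing chest pain", "escalate"),
    ("CHEST PAIN since this morning", "escalate"),
    ("How do I reset my password?", "self-service"),
]

failures = [(msg, want, triage_bot(msg))
            for msg, want in scenarios
            if triage_bot(msg) != want]

for msg, want, got in failures:
    print(f"FAIL: {msg!r} -> {got} (expected {want})")
print(f"{len(scenarios) - len(failures)}/{len(scenarios)} scenarios passed")
```

In practice the scenario list would come from simulation, red teaming, or synthetic data generation, the techniques mentioned above, rather than a few hand-written cases, and humans would still review the failures.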
Host Kevin Werbach interviews Uthman Ali, Global Responsible AI Officer at BP, to delve into the complexities of implementing responsible AI practices within a global energy company. Ali emphasizes how the culture of safety in the industry influences BP's willingness to engage in AI governance. He discusses the necessity of embedding ethical AI principles across all levels of the organization, emphasizing tailored training programs for various employee roles—from casual AI users to data scientists—to ensure a comprehensive understanding of AI's ethical implications. He also highlights the importance of proactive governance, advocating for the development of ethical policies and procedures that address emerging technologies such as robotics and wearables. Ali's approach underscores the balance between innovation and ethical responsibility, aiming to foster an environment where AI advancements align with societal values and regulatory standards. Uthman Ali is BP's first Global Responsible AI Officer, and has been instrumental in establishing the company's Digital Ethics Center of Excellence. He advises prominent organizations such as the World Economic Forum and the British Standards Institute on AI governance and ethics. Additionally, Ali contributes to research and policy discussions as an advisor to Oxford University's Oxethica spinout and various AI safety institutes.   Transcript Prioritizing People and Planet as the Metrics for Responsible AI (IEEE Standards Association) Robocops and Superhumans: Dilemmas of Frontier Technology (2024 podcast interview)
Kevin Werbach interviews journalist and author Karen Hao about her new book Empire of AI, which chronicles the rise of OpenAI and the broader implications of generative artificial intelligence. Hao reflects on how the ethical challenges of AI have evolved, noting the shift from concerns like data privacy and algorithmic bias to more complex issues such as intellectual property violations, environmental impact, misleading user experiences, and concentration of power. She emphasizes that while some technical solutions exist, they are rarely implemented by developers, and foundational harms often occur before tools reach end users. Hao argues that OpenAI's trajectory was not inevitable but instead the result of specific ideological beliefs, aggressive scaling decisions, and CEO Sam Altman's singular fundraising prowess. She critiques the "pseudo-religious" ideologies underpinning Silicon Valley's AI push, where utopian and doomer narratives coexist to justify rapid development. Hao outlines a more democratic alternative focused on smaller, task-specific models and stronger regulation to redirect AI's future trajectory. Karen Hao has written about AI for publications such as The Atlantic, The Wall Street Journal, and MIT Technology Review. She was the first journalist to ever profile OpenAI, and leads The AI Spotlight Series, a program with the Pulitzer Center that trains thousands of journalists around the world on how to cover AI. She has also been a fellow with the Harvard Technology and Public Purpose program, the MIT Knight Science Journalism program, and the Pulitzer Center's AI Accountability Network. She won an American Humanist Media Award in 2024, and an American National Magazine Award in 2022. Transcript Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI Inside the Chaos at OpenAI (The Atlantic, 2023) Cleaning Up ChatGPT Takes Heavy Toll on Human Workers (Wall St. Journal, 2023) The New AI Panic (The Atlantic, 2023) The Messy, Secretive Reality Behind OpenAI's Bid to Save the World (MIT Technology Review, 2020)
AI companion applications, which create interactive personas for one-on-one conversations, are incredibly popular. However, they raise a number of challenging ethical, legal, and psychological questions. In this episode, Kevin Werbach speaks with researcher Jaime Banks about how users view their conversations with AI companions, and the implications for governance. Banks shares insights from her research on mind-perception, and how AI companion users engage in a willing suspension of disbelief similar to watching a movie. She highlights both potential benefits and dangers, as well as novel issues such as the real feelings of loss users may experience when a companion app shuts down. Banks advocates for data-driven policy approaches rather than moral panic, suggesting responses such as an "AI user's Bill of Rights" for these services.   Jaime Banks is Katchmar-Wilhelm Endowed Professor at the School of Information Studies at Syracuse University. Her research examines human-technological interaction, including social AI, social robots, and videogame avatars. She focuses on relational construals of mind and morality, communication processes, and how media shape our understanding of complex technologies. Her current funded work focuses on social cognition in human-AI companionship and on the effects of humanizing language on moral judgments about AI. Transcript 'She Helps Cheer Me Up': The People Forming Relationships With AI Chatbots (The Guardian, April 2025) Can AI Be Blamed for a Teen's Suicide? (NY Times, October 2024) Beyond ChatGPT: AI Companions and the Human Side of AI (Syracuse iSchool video)