The Road to Accountable AI

Author: Kevin Werbach


Description

Artificial intelligence is changing business, and the world. How can you navigate through the hype to understand AI's true potential, and the ways it can be implemented effectively, responsibly, and safely? Wharton Professor and Chair of Legal Studies and Business Ethics Kevin Werbach has analyzed emerging technologies for thirty years, and created one of the first business school courses on the legal and ethical considerations of AI in 2016. He interviews the experts and executives building accountable AI systems in the real world today.
17 Episodes
Join Kevin and Nuala as they discuss Walmart's approach to AI governance, emphasizing the application of existing corporate principles to new technologies. She explains the Walmart Responsible AI Pledge, its collaborative creation process, and the importance of continuous monitoring to ensure AI tools align with corporate values. Nuala reveals her commitment to responsible AI and customer centricity at Walmart through the mantra “Inform, Educate, Entertain” and examples like the "Ask Sam" tool that aids associates. They address the complexities of AI implementation, including bias, accuracy, and trust, and the challenges of standardizing AI frameworks. Kevin and Nuala conclude with reflections on the need for humility and agility in the evolving AI landscape, emphasizing the ongoing responsibility of technology providers to ensure positive impacts.

Nuala O’Connor is the SVP and chief counsel, digital citizenship, at Walmart. Nuala leads the company’s Digital Citizenship organization, which advances the ethical use of data and responsible use of technology. Before joining Walmart, Nuala served as president and CEO of the Center for Democracy and Technology. In the private sector, Nuala has served in a variety of privacy leadership and legal counsel roles at Amazon, GE, and DoubleClick. In the public sector, Nuala served as the first chief privacy officer at the U.S. Department of Homeland Security. She also served as deputy director of the Office of Policy and Strategic Planning, and later as chief counsel for technology, at the U.S. Department of Commerce. Nuala holds a B.A. from Princeton University, an M.Ed. from Harvard University, and a J.D. from Georgetown University Law Center.

Nuala O'Connor to Join Walmart in New Digital Citizenship Role
Walmart launches its own voice assistant, ‘Ask Sam,’ initially for employee use
Our Responsible AI Pledge: Setting the Bar for Ethical AI

Want to learn more?
Engage live with Professor Werbach and other Wharton faculty experts in Wharton's new Strategies for Accountable AI online executive education program. It's perfect for managers, entrepreneurs, and advisors looking to harness AI’s power while addressing its risks.
Join Kevin and Suresh as they discuss the latest tools and frameworks that companies can use to effectively combat algorithmic bias, all while navigating the complexities of integrating AI into organizational strategies. Suresh describes his experiences at the White House Office of Science and Technology Policy and the creation of the Blueprint for an AI Bill of Rights, including its five fundamental principles: safety and effectiveness, non-discrimination, data minimization, transparency, and accountability. Suresh and Kevin dig into the economic and logistical challenges that academics face in government roles and highlight the importance of collaborative efforts, alongside clear rules to follow, in fostering ethical AI. The discussion also covers the role of education, cultural shifts, and the European Union's AI Act in shaping global regulatory frameworks. Suresh discusses his creation of Brown University's Center on Technological Responsibility, Reimagination, and Redesign, and why trust and accountability are paramount, especially with the rise of large language models.

Suresh Venkatasubramanian is a Professor of Data Science and Computer Science at Brown University. His background is in algorithms and computational geometry, as well as data mining and machine learning. His current research interests lie in algorithmic fairness and, more generally, the impact of automated decision-making systems on society. Prior to Brown University, Suresh was at the University of Utah, where he received a CAREER award from the NSF for his work in the geometry of probability. He received a test-of-time award at ICDE 2017 for his work in privacy. His research on algorithmic fairness has received press coverage across North America and Europe, including NPR’s Science Friday, NBC, and CNN, as well as other media outlets. For the 2021–2022 academic year, he served as Assistant Director for Science and Justice in the White House Office of Science and Technology Policy.

Blueprint for an AI Bill of Rights
Brown University's Center on Technological Responsibility, Reimagination, and Redesign
Brown professor Suresh Venkatasubramanian tackles societal impact of computer science at White House
Kevin Werbach speaks with Diya Wynn, the responsible AI lead at Amazon Web Services (AWS). Diya shares how she pioneered a formal practice for ethical AI at AWS, and explains AWS’s “Well-Architected” framework to assist customers in responsibly deploying AI. Kevin and Diya also discuss the significance of diversity and human bias in AI systems, revealing the necessity of incorporating diverse perspectives to create more equitable AI outcomes.

Diya Wynn leads a team at AWS that helps customers implement responsible AI practices. She has over 25 years of experience as a technologist scaling products for acquisition; driving inclusion, diversity & equity initiatives; and leading operational transformation. She serves on the AWS Health Equity Initiative Review Committee; mentors at Tulane University, Spelman College, and GMI; was a mayoral appointee in Environment Affairs for six years; and guest lectures regularly on responsible and inclusive technology.

Responsible AI for the greater good: insights from AWS’s Diya Wynn
Ethics In AI: A Conversation With Diya Wynn, AWS Responsible AI Lead
Kevin Werbach is joined by Paula Goldman, Chief Ethical and Humane Use Officer at Salesforce, to discuss the pioneering efforts of her team in building a culture of ethical technology use. Paula shares insights on aligning risk assessments and technical mitigations with business goals to bring stakeholders on board. She explains how AI governance functions in a large business with enterprise customers, who have distinctive needs and approaches. Finally, she highlights the shift from "human in the loop" to "human at the helm" as AI technology advances, stressing that today's investments in trustworthy AI are essential for managing tomorrow’s more advanced systems.

Paula Goldman leads Salesforce in creating a framework to build and deploy ethical technology that optimizes social benefit. Prior to Salesforce, she served as Global Lead of the Tech and Society Solutions Lab at Omidyar Network, and has extensive entrepreneurial experience managing frontier market businesses.

Creating safeguards for the ethical use of technology
Trusted AI Needs a Human at the Helm
Responsible Use of Technology: The Salesforce Case Study
Kevin Werbach speaks with Navrina Singh of Credo AI, which automates AI oversight and regulatory compliance. Singh addresses the increasing importance of trust and governance in the AI space. She discusses the need to standardize and scale oversight mechanisms by helping companies align and translate their systems to include all stakeholders and comply with emerging global standards. Kevin and Navrina also explore the importance of sociotechnical approaches to AI governance, the necessity of mandated AI disclosures, the democratization of generative AI, adaptive policymaking, and the need for enhanced AI literacy within organizations to keep pace with evolving technologies and regulatory landscapes.

Navrina Singh is the Founder and CEO of Credo AI, a Governance SaaS platform empowering enterprises to deliver responsible AI. Navrina previously held multiple product and business leadership roles at Microsoft and Qualcomm. She is a member of the U.S. Department of Commerce National Artificial Intelligence Advisory Committee (NAIAC), an executive board member of Mozilla Foundation, and a Young Global Leader of the World Economic Forum.

Credo.ai
ISO/IEC 42001 standard for AI governance
Navrina Singh Founded Credo AI To Align AI With Human Values
Kevin Werbach speaks with Scott Zoldi of FICO, which pioneered consumer credit scoring in the 1950s and now offers a suite of analytics and fraud detection tools. Zoldi explains the importance of transparency and interpretability in AI models, emphasizing a “simpler is better” approach to creating clear and understandable algorithms. He discusses FICO's approach to responsible AI, which includes establishing model governance standards, and enforcing these standards through the use of blockchain technology. Zoldi explains how blockchain provides an immutable record of the model development process, enhancing accountability and trust. He also highlights the challenges organizations face in implementing responsible AI practices, particularly in light of upcoming AI regulations, and stresses the need for organizations to catch up in defining governance standards to ensure trustworthy and accountable AI models.

Dr. Scott Zoldi is Chief Analytics Officer of FICO, responsible for analytics and AI innovation across FICO's portfolio. He has authored more than 130 patents, and is a long-time advocate and inventor in the space of responsible AI. He was nominated for American Banker’s 2024 Innovator Award and received Corinium’s Future Thinking Award in 2022. Zoldi is a member of the Board of Advisors for FinReg Lab, and serves on the Boards of Directors of Software San Diego and San Diego Cyber Center of Excellence. He received his Ph.D. in theoretical and computational physics from Duke University.

Navigating the Wild AI with Dr. Scott Zoldi
How to Use Blockchain to Build Responsible AI
The State of Responsible AI in Financial Services
Professor Kevin Werbach and AI ethicist Olivia Gambelin discuss the moral responsibilities surrounding Artificial Intelligence, and the practical steps companies should take to address them. Olivia explains how companies can begin their responsible AI journey, starting with taking inventory of their systems and using Olivia's Values Canvas to map the ethical terrain. Kevin and Olivia delve into the potential reasons companies avoid investing in ethical AI, the financial and compliance benefits of making the investment, and the best practices of companies that succeed in AI governance. Olivia also discusses her initiative to build a network of responsible AI practitioners and promote development of the field.

Olivia Gambelin is founder and CEO of Ethical Intelligence, an advisory firm specializing in Ethics-as-a-Service for businesses ranging from Fortune 500 companies to Series A startups. Her book, Responsible AI, offers a comprehensive guide to integrating ethical practices for AI deployment. She serves on the Founding Editorial Board for Springer Nature’s AI and Ethics Journal, co-chairs the IEEE AI Expert Network Criteria Committee, and advises the Ethical AI Governance Group and The Data Tank. She is deeply involved in both the Silicon Valley startup ecosystem and advising on AI policy and regulation in Europe.

Olivia Gambelin’s Website
Responsible AI: Implement an Ethical Approach in Your Organization
The EI (Ethical Intelligence) Network
The Values Canvas
Join Professor Kevin Werbach and Beth Noveck, New Jersey's first Chief AI Strategist, as they explore AI's transformative power in public governance. Beth reveals how AI is revolutionizing government operations, from rewriting complex unemployment insurance letters in plain English to analyzing call data for faster responses. They discuss New Jersey's innovative use of generative AI to cut response times in half, empowering public servants to better serve their communities while balancing ethical considerations and privacy concerns. Learn about New Jersey's training programs, sandboxes, and pilot projects designed to integrate AI safely into public service. Beth also shares inspiring global examples, like Taiwan's citizen-engaged decision-making processes and Iceland's Better Reykjavik initiative, which inform local projects like New Jersey's mycareernj.gov career coaching tool.

Beth Simone Noveck directs the Governance Lab (GovLab) at New York University's Tandon School of Engineering. As the inaugural U.S. Deputy Chief Technology Officer and leader of the White House Open Government Initiative under President Obama, she crafted innovative strategies to enhance governmental transparency, cooperation, and public engagement. Noveck authored "Wiki Government," a seminal work advocating for the use of digital tools to revolutionize civic interaction. Her roles have included Chief Innovation Officer for New Jersey and Senior Advisor for the Open Government Initiative, earning her wide acclaim and numerous accolades for her contributions to the field. Noveck's work emphasizes the transformative potential of technology in fostering more open, transparent, and participatory governance structures.

Open Government Initiative
The GovLab
Wiki Government
Beth Noveck TED Talk: Demand a more open-source government
Join Professor Kevin Werbach and Jean-Enno Charton, Director of Digital Ethics and Bioethics at Merck KGaA, as they explore the ethical challenges of AI in healthcare and life sciences. Charton delves into the evolution of Merck's AI ethics program, which stemmed from their bioethics advisory panel addressing complex ethical dilemmas in areas like fertility research and clinical trials. He details the formation of a dedicated digital ethics panel, incorporating industry experts and academics, and developing the Principle-at-Risk Analysis (PARA) tool to identify and mitigate ethical risks in AI applications. Highlighting the significance of trust, transparency, and pragmatic solutions, Charton discusses how these principles are applied across Merck's diverse business units. Listen in to thoroughly examine the intersection between bioethics, trust, and AI.

Jean-Enno Charton is the Chief Data and AI Officer at Merck KGaA, a global pharmaceutical and life sciences company. He chairs the Digital Ethics Advisory Panel, focusing on ethical data use and AI applications within the company. Charton led the development of Merck's Code of Digital Ethics, guiding ethical principles such as autonomy, justice, and transparency in digital initiatives. A recognized speaker on digital ethics, his work contributes to responsible data-driven technology deployment in the healthcare and life sciences sector.

Merck Code of Digital Ethics
IEEE Ethically Aligned Design
Principle-at-Risk Analysis
Join Professor Kevin Werbach and Dominique Shelton Leipzig, an expert in data privacy and technology law, as they share practical insights on AI's transformative potential and regulatory challenges in this episode of The Road to Accountable AI. They dissect the ripple effects of recent legislation, and why setting industry standards and codifying trust in AI are more than mere legal checkboxes—they're the bedrock of innovation and integrity in business. Transitioning from theory to practice, this episode uncovers what it truly means to govern AI systems that are accurate, safe, and respectful of privacy. Kevin and Dominique navigate through the high-risk scenarios outlined by the EU and discuss how companies can future-proof their brands by adopting AI governance strategies.

Dominique Shelton Leipzig is a partner and head of the Ad Tech Privacy & Data Management team and the Global Data Innovation team at the law firm Mayer Brown. She is the author of the recent book Trust: Responsible AI, Innovation, Privacy and Data Leadership. Dominique co-founded NxtWork, a non-profit aimed at diversifying leadership in corporate America, and has trained over 50,000 professionals in data privacy, AI, and data leadership. She has been named a "Legal Visionary" by the Los Angeles Times, a "Top Cyber Lawyer" by the Daily Journal, and a "Leading Lawyer" by Legal 500.

Trust: Responsible AI, Innovation, Privacy and Data Leadership
Mayer Brown Digital Trust Summit
A Framework for Assessing AI Risk
Dominique’s Data Privacy Recommendation Enacted in Biden’s EO
Join Professor Werbach and Dragos Tudorache, co-rapporteur of the EU AI Act and one of the most influential AI policymakers in the world, to discuss the urgent need for AI regulation and collaboration. They discuss the nuances of the Act's attempt to balance mitigating risk and fostering innovation, and dissect the legislation's approach to ensuring trust in AI through technology-neutral language and adaptable mechanisms. Next, they examine the Act's strategic focus on high-risk AI applications, filling gaps not covered by existing EU tech regulations like GDPR and the Digital Services Act. The conversation delves into the rapid integration of generative AI provisions into the Act, its enforcement challenges similar to those experienced with GDPR, and the critical importance of global coordination in AI policy. Tudorache contrasts AI policies between the EU and the US and offers practical advice for businesses preparing for the AI Act's implementation. The discussion also touches on the potential need for future regulations, providing critical insights for stakeholders in the AI sector.

Dragos Tudorache, a Romanian member of the European Parliament and a key figure in European AI policy, has significantly shaped the discourse around AI regulation since his election in 2019. His dedicated leadership in chairing the Special Committee on Artificial Intelligence in the Digital Age (AIDA) set the groundwork for critical legislative efforts. As one of the two principal negotiators of the EU's pioneering AI Act, Tudorache played an instrumental role in crafting the first comprehensive AI law globally.

EU AI Act
Washington Post calls Dragos Tudorache "The Smartest Politician on AI"
In this episode, Professor Kevin Werbach and Dr. Richard Benjamins, former Chief Responsible AI Officer at Telefónica, discuss Telefónica's journey towards responsible AI, corporate strategies to address these challenges, and Benjamins' views on generative AI and the European AI Act. They explore the role of the Responsible AI Champion in embedding ethical practices across business functions and discuss the dynamics of AI ethics committees, which vary from permission-based to problem-based frameworks. Their conversation highlights the scarcity of AI ethics expertise and the diverse requirements for ethical oversight depending on a company's AI integration level. Benjamins' insights significantly enhance understanding of the global AI regulatory landscape and its future implications.

Dr. Richard Benjamins is a leading expert in responsible AI, previously serving as Chief Responsible AI Officer and Chief AI and Data Strategist at Telefónica. Recognized among the top 100 influential figures in data-driven business, he co-founded OdiseIA, an observatory focused on the ethical and social impacts of AI. His career highlights include being the Group Chief Data Officer at AXA and holding key roles at Telefónica, contributing extensively to Big Data and Analytics. Dr. Benjamins founded Telefónica's Big Data for Social Good department, showcasing his commitment to leveraging data for societal benefits. With an academic background in Cognitive Science and Artificial Intelligence, he has also been involved in academia and research across the globe. His advisory role at BigML underscores his dedication to making machine learning more accessible. His projects have ranged from environmental efforts like air quality measurement in Madrid to healthcare advancements during the pandemic, embodying his advocacy for responsible use of AI to address both current and future challenges in the field.

The AI Way in Telefónica
UNESCO Ethics of Artificial Intelligence
Telefónica’s Approach to the Use of Responsible AI
From AI Principles to Responsible AI: Challenges
EU AI Act
Join Professor Kevin Werbach and Elham Tabassi, Associate Director for Emerging Technologies at NIST, as they explore the nuances of the NIST AI Risk Management Framework (RMF). Their discussion illuminates the science of measurement, the challenges of operationalizing legal and governance requirements for AI, and the pursuit of standards for trustworthy AI. During the conversation, they delve into the diverse challenges businesses face when adopting the AI RMF and how the Trustworthy AI Resource Center serves as a critical tool for keeping organizations at the forefront of the technology. They examine the intricate task of measuring AI systems, tackling biases of all kinds, and overcoming the socio-technical barriers that influence the development of standards. With NIST’s non-regulatory role fostering transformative policies, the discussion offers insights into the responsible growth of AI technologies, setting the stage for a future where AI safety and trustworthiness are tangible realities.

Elham Tabassi is Associate Director of Emerging Technologies at NIST, the National Institute of Standards and Technology in the US Commerce Department, and Chief Technologist of the US government’s new AI Safety Institute. She was named by Time Magazine as one of the 100 most influential people in AI, because of her role in the NIST AI Risk Management Framework, or RMF.

Elham Tabassi: Time 100 Most Influential People in AI
NIST AI Risk Management Framework v. 1.0
Road Map for the AI Risk Management Framework
Fact Sheet: Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
Join Professor Werbach and Reid Blackman, the founder of Virtue and author of Ethical Machines, to unravel the fabric of ethical responsibilities in AI development. They navigate the transition from "AI ethics" to "responsible AI," debunking the myth that ethics is just a matter of opinion and showcasing how it can be both objective and actionable. The conversation addresses the ethical pitfalls of AI, such as biases and privacy concerns, emphasizing that these issues often arise from the technology itself. They explore strategies for promoting ethical AI within organizations, including the implementation of robust policies and risk committees. Highlighting the tension between ethics and profitability, the episode also celebrates AI's potential to improve efficiency. The pivotal role of expert guidance in leading companies toward responsible AI use is underscored, painting a future where technology enhances human capabilities ethically.

Reid Blackman, Ph.D., founder of Virtue Consultants, leverages his philosophy background to infuse AI development and deployment with ethical considerations, aiming to mitigate risks such as bias, privacy violations, and the opacity of AI systems. Through Virtue, he operationalizes ethics in technology by embedding cross-functional teams into companies' product development and corporate culture, addressing both theoretical and practical challenges. His work spans advising prominent organizations like AWS, the FBI, NASA, and the World Economic Forum, showcasing his extensive influence in promoting responsible AI practices across various sectors.

Reid's book Ethical Machines
The EU's AI Act and How Companies Can Achieve Compliance
How to Avoid the Ethical Nightmares of Emerging Technology
Reid's podcast
Professor Werbach is joined by Azeem Azhar, a leading expert in exponential technologies, for a riveting conversation on the trajectory of AI, regulation, and the larger challenges of concentration in the tech sector. They traverse Azeem’s professional journey, highlighting the pivotal moments in AI development, such as the rise of deep learning, and discuss the implications for business leaders now at the helm of these potent tools. Drawing parallels with historical tech calamities, they examine the safety challenges inherent in large language models and how companies like Google and OpenAI juggle the race for innovation with the necessity for thorough testing. The conversation then delves into the murky waters of regulation and the tug-of-war between progress and control, with a spotlight on the EU's Digital Markets Act and its impact on global tech firms.

Azeem Azhar is the author of "Exponential: How Accelerating Technology is Leaving Us Behind and What to Do About It", which quickly became an Amazon bestseller in Geopolitics upon its release. As the founder of the data analytics firm PeerIndex, later acquired by Brandwatch, Azeem has a proven track record as an angel investor, with investments in over 30 startups, including early-stage companies in AI, renewable energy, and female healthtech. Some of his most notable interviews include discussions with OpenAI CEO Sam Altman, co-founder and CEO of Anthropic Dario Amodei, and legendary Silicon Valley investor Vinod Khosla. These conversations cover a wide range of topics, including the implications of AI on ownership of thoughts, the potential impact of AI on global inequality, and the need to change assumptions about conflict to avoid a second Cold War. His ability to break down complex technological concepts and their societal implications has earned him recognition as a global futurist and exponential thinker, making his contributions invaluable for understanding the rapidly evolving technological landscape.

Exponential View, Azeem’s Substack and community
Azeem’s book The Exponential Age
Azeem and Sam Altman's Discussion
EU AI Act
Welcome to The Road to Accountable AI. Explore the crucial intersection of technology, law, and business ethics with Wharton professor Kevin Werbach, as he and his guests examine efforts to implement responsible, safe, and trustworthy artificial intelligence. In this initial episode, Professor Werbach describes the concept he calls Accountable AI. He talks about his background in emerging technology over the past three decades, starting with his experience leading internet policy at the U.S. Federal Communications Commission during the early years of the commercial internet. He explains why AI has such revolutionary potential today, while at the same time raising serious legal, ethical, and public policy concerns. He provides five reasons why companies should take Accountable AI seriously.

Look for upcoming episodes featuring top AI experts such as Azeem Azhar (Exponential View), Reid Blackman (author of Ethical Machines), Elham Tabassi (NIST), Dragos Tudorache (European Parliament), Dominique Shelton Leipzig (Mayer Brown), Scott Zoldi (FICO), Navrina Singh (Credo AI), and Paula Goldman (Salesforce).

Accountable AI Website
Professor Werbach’s Substack
Professor Werbach’s personal page
Pew Research Center Survey on Americans’ Views of AI
DataRobot State of AI Bias Report
KPMG US AI Risk Survey Report
The Blockchain and the New Architecture of Trust