Pondering AI

Author: Kimberly Nevala, Strategic Advisor - SAS


Description

How is the use of artificial intelligence (AI) shaping our human experience?

Kimberly Nevala ponders the reality of AI with a diverse group of innovators, advocates and data scientists. Ethics and uncertainty. Automation and art. Work, politics and culture. In real life and online. Contemplate AI’s impact, for better and worse.

All presentations represent the opinions of the presenter and do not represent the position or the opinion of SAS.
53 Episodes
Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs. Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge imbues moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities, from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications. Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy that seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and a Visiting Scholar at the ND-IBM Tech Ethics Lab. A transcript of this episode is here.
Miriam Vogel disputes that AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI. Miriam Vogel traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance. Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents. Miriam Vogel is the President and CEO of EqualAI, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (NAIAC). A transcript of this episode is here.
Melissa Sariffodeen contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique. Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits. Melissa Sariffodeen is the founder of The Digital Potential Lab, co-founder and CEO of Canada Learning Code and a Professor at the Ivey Business School at Western University where she focuses on the management of information and communication technologies. A transcript of this episode is here.
Shannon Mullen O’Keefe champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations. Shannon shares her professional journey from curating leaders to innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She highlights powerful insights spurred by the values and questions posed in the book 10 Moral Questions: How to Design Tech and AI Responsibly. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon highlights the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off. Shannon Mullen O’Keefe is the Curator of the Museum of Ideas and co-author of the Q Collective’s book 10 Moral Questions: How to Design Tech and AI Responsibly. Learn more at https://www.10moralquestions.com/. A transcript of this episode is here.
Sarah Gibbons and Kate Moran riff on the experience of using current AI tools, how AI systems may change our behavior and the application of AI to human-centered design. Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research. Sarah Gibbons and Kate Moran are Vice Presidents at Nielsen Norman Group where they lead strategy, research, and design in the areas of human-centered design and user experience (UX). A transcript of this episode is here.
Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will. In this sobering tour through time, Simon shows that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle. Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights stakeholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work. Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity” with Daron Acemoglu. A transcript of this episode is here.
Professor Rose Luckin provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning. Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself. Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of EDUCATE Ventures Research Ltd., a London hub for educational technology start-ups, researchers and educators involved in evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book Machine Learning and Human Intelligence (free after creating account) and the EDUCATE Ventures newsletter The Skinny. A transcript of this episode is here.
Katrina Ingram addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education. Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy. Katrina Ingram is the Founder & CEO of Ethically Aligned AI, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits. A transcript of this episode is here.
Paulo Carvão discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech. In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy. An economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferring progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.” Paulo Carvão is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the Tech and Democracy Substack. A transcript of this episode is here.
Dr. Christina Jayne Colclough reflects on AI Regulations at Work. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Giselle Mota reflects on Inclusion at Work in the age of AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Ganes Kesari reflects on generative AI (GAI) in the Enterprise. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Chris McClean reflects on Digital Ethics and Regulation in AI today. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Dr. Erica Thompson reflects on Making Model Decisions about and with AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Erica’s book Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do About It.
Roger Spitz reflects on Upskilling Human Decision Making in the age of AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Roger’s book series The Definitive Guide to Thriving on Disruption.
Sheryl Cababa reflects on Systems Thinking in AI design. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Sheryl’s book Closing the Loop: Systems Thinking for Designers.
Ilke Demir reflects on Generative AI (GAI) Detection and Protection. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Professor J Mark Bishop reflects on large language models (LLMs) and beyond. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.
Henrik Skaug Sætra reflects on Environmental and Social Sustainability with AI. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next. To learn more, check out Henrik’s latest book: Technology and Sustainable Development: The Promise and Pitfalls of Techno-Solutionism.
Yonah Welker reflects on Policymaking, Inclusion and Accessibility in AI today. In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.