This is going to be the most productive decade in the history of our species, says Mustafa Suleyman, author of "The Coming Wave," CEO of Inflection AI, and co-founder of Google DeepMind. But in order to truly reap the benefits of AI, we need to learn how to contain it. Paradoxically, part of that will mean collectively saying no to certain forms of progress. As an industry leader reckoning with a future that's about to be "turbocharged," Mustafa says we can all play a role in shaping the technology, both in hands-on ways and by advocating for appropriate governance.

RECOMMENDED MEDIA
The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma
This new book from Mustafa Suleyman is a must-read guide to the technological revolution just starting, and the transformed world it will create.

Partnership on AI
Partnership on AI is bringing together diverse voices from across the AI community to create resources for advancing positive outcomes for people and society.

Policy Reforms Toolkit from the Center for Humane Technology
Digital lawlessness has been normalized in the name of innovation. It's possible to craft policy that protects the conditions we need to thrive.

RECOMMENDED YUA EPISODES
AI Myths and Misconceptions
Can We Govern AI? with Marietje Schaake
The AI Dilemma

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Last week, Senator Chuck Schumer brought together Congress and many of the biggest names in AI for the first closed-door AI Insight Forum in Washington, D.C. Tristan and Aza were invited speakers at the event, along with Elon Musk, Satya Nadella, Sam Altman, and other leaders. In this update on Your Undivided Attention, Tristan and Aza recount how they felt the meeting went, what they communicated in their statements, and what it felt like to critique Meta's LLM in front of Mark Zuckerberg.

Correction: In this episode, Tristan says GPT-3 couldn't find vulnerabilities in code. GPT-3 could find security vulnerabilities, but GPT-4 is exponentially better at it.

RECOMMENDED MEDIA
In Show of Force, Silicon Valley Titans Pledge 'Getting This Right' With A.I.
Elon Musk, Sam Altman, Mark Zuckerberg, Sundar Pichai and others discussed artificial intelligence with lawmakers, as tech companies strive to influence potential regulations.

Majority Leader Schumer Opening Remarks For The Senate's Inaugural AI Insight Forum
Senate Majority Leader Chuck Schumer (D-NY) opened the Senate's inaugural AI Insight Forum.

The Wisdom Gap
As seen in Tristan's talk on this subject in 2022, the scope and speed of our world's issues are accelerating and growing more complex. And yet, our ability to comprehend those challenges and respond accordingly is not matching pace.

RECOMMENDED YUA EPISODES
Spotlight On AI: What Would It Take For This to Go Well?
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
Spotlight: Elon, Twitter and the Gladiator Arena

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Where do the top Silicon Valley AI researchers really think AI is headed? Do they have a plan if things go wrong? In this episode, Tristan Harris and Aza Raskin reflect on the last several months of highlighting AI risk, and share their insider takes on a high-level workshop run by CHT in Silicon Valley.

NOTE: Tristan refers to journalist Maria Ressa and mentions that she received 80 hate messages per hour at one point. She actually received more than 90 messages an hour.

RECOMMENDED MEDIA
Musk, Zuckerberg, Gates: The titans of tech will talk AI at private Capitol summit
This week will feature a series of public hearings on artificial intelligence. But all eyes will be on the closed-door gathering convened by Senate Majority Leader Chuck Schumer.

Takeaways from the roundtable with President Biden on artificial intelligence
Tristan Harris talks about his recent meeting with President Biden to discuss regulating artificial intelligence.

Biden, Harris meet with CEOs about AI risks
Vice President Kamala Harris met with the heads of Google, Microsoft, Anthropic, and OpenAI as the Biden administration rolled out initiatives meant to ensure that AI improves lives without putting people's rights and safety at risk.

RECOMMENDED YUA EPISODES
The AI Dilemma
The AI 'Race': China vs. the US with Jeffrey Ding and Karen Hao
The Dictator's Playbook with Maria Ressa

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
In the debate over slowing down AI, we often hear the same argument against regulation: "What about China? We can't let China get ahead." To dig into the nuances of this argument, Tristan and Aza speak with academic researcher Jeffrey Ding and journalist Karen Hao, who take us through what's really happening in Chinese AI development. They address China's advantages and limitations, what risks are overblown, and what, in this multi-national competition, is at stake as we imagine the best possible future for everyone.

CORRECTION: Jeffrey Ding says the export controls on advanced chips that were established in October 2022 only apply to military end-users. The controls also impose a license requirement on the export of those advanced chips to any China-based end-user.

RECOMMENDED MEDIA
Recent Trends in China's Large Language Model Landscape by Jeffrey Ding and Jenny W. Xiao
This study covers a sample of 26 large-scale pre-trained AI models developed in China.

The diffusion deficit in scientific and technological power: re-assessing China's rise by Jeffrey Ding
This paper argues for placing a greater weight on a state's capacity to diffuse, or widely adopt, innovations.

The U.S. Is Turning Away From Its Biggest Scientific Partner at a Precarious Time by Karen Hao and Sha Hua
U.S. moves to cut research ties with China over security concerns threaten American progress in critical areas.

Why China Has Not Caught Up Yet: Military-Technological Superiority and the Limits of Imitation, Reverse Engineering, and Cyber Espionage by Andrea Gilli and Mauro Gilli
Military technology has grown so complex that it's hard to imitate.

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
A Fresh Take on Tech in China with Rui Ma and Duncan Clark
Digital Democracy is Within Reach with Audrey Tang

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
For all the talk about AI, we rarely hear about how it will change our relationships. As we swipe to find love and consult chatbot therapists, acclaimed psychotherapist and relationship expert Esther Perel warns that another harmful "AI" is on the rise: Artificial Intimacy, which is depriving us of real connection. Tristan and Esther discuss how depending on algorithms can fuel alienation, and then imagine how we might design technology to strengthen our social bonds.

RECOMMENDED MEDIA
Mating in Captivity by Esther Perel
Esther's debut work on the intricacies behind modern relationships, and the dichotomy of domesticity and sexual desire.

The State of Affairs by Esther Perel
Esther takes a look at modern relationships through the lens of infidelity.

Where Should We Begin? with Esther Perel
Listen in as real couples in search of help bare the raw and profound details of their stories.

How's Work? with Esther Perel
Esther's podcast that focuses on the hard conversations we're afraid to have at work.

Lars and the Real Girl (2007)
A young man strikes up an unconventional relationship with a doll he finds on the internet.

Her (2013)
In a near future, a lonely writer develops an unlikely relationship with an operating system designed to meet his every need.

RECOMMENDED YUA EPISODES
Big Food, Big Tech and Big AI with Michael Moss
The AI Dilemma
The Three Rules of Humane Tech
Digital Democracy is Within Reach with Audrey Tang

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
We are on the cusp of an explosion of cheap, consumer-ready neurotechnology, from earbuds that gather our behavioral data to sensors that can read our dreams. And it's all going to be supercharged by AI. This technology is moving from niche to mainstream, and it has the same potential to become exponential. Legal scholar Nita Farahany talks us through the current state of neurotechnology and its deep links to AI. She says that we urgently need to protect the last frontier of privacy: our internal thoughts. And she argues that without a new legal framework around "cognitive liberty," we won't be able to insulate our brains from corporate and government intrusion.

RECOMMENDED MEDIA
The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology by Nita Farahany
The Battle for Your Brain offers a path forward to navigate the complex dilemmas that will fundamentally impact our freedom to understand, shape, and define ourselves.

Computer Program Reveals What Neurons in the Visual Cortex Prefer to Look At
A study of macaque monkeys at Harvard generated valuable clues based on an artificial intelligence system that can reliably determine what neurons in the brain's visual cortex prefer to see.

Understanding Media: The Extensions of Man by Marshall McLuhan
An influential work by a fixture in media discourse.

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
Talking With Animals… Using AI
How to Free Our Minds with Cult Deprogramming Expert Dr. Steven Hassan

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Social media was humanity's "first contact" moment with AI. If we're going to create laws that are strong enough to prevent AI from destroying our societies, we could benefit from taking a look at the major lawsuits against social media platforms that are playing out in our courts right now.

In our last episode, we took a close look at Big Food and its dangerous "race to the bottom" that parallels AI. We continue that theme this week with an episode about litigating social media and the consequences of the race to engagement, in order to inform how we can approach AI harms. Our guest, attorney Laura Marquez-Garrett, left her predominantly defense-oriented practice to join the Social Media Victims Law Center in February 2022. Laura is literally on the front lines of the battle to hold social media firms accountable for the harms they have created in young people's lives over the past decade.

Listener warning: there are distressing and potentially triggering details within the episode.

Correction: Tristan refers to the Social Media Victims Law Center as a nonprofit legal center. They are a for-profit law firm.

RECOMMENDED MEDIA
1) If you're a parent whose child has been impacted by social media, Attorneys General in Colorado, New Hampshire, and Tennessee are asking to hear your story. Your testimonies can help ensure that social media platforms are designed safely for kids. For more information, please visit the respective state links: Colorado, New Hampshire, Tennessee.

2) Social Media Victims Law Center
A legal center founded in 2021 in response to the testimony of Facebook whistleblower Frances Haugen.

3) Resources for Parents & Educators
Overwhelmed by our broken social media environment and wondering where to start? Check out our Youth Toolkit plus three actions you can take today.

4) The Social Dilemma
Learn how the system works. Watch and share The Social Dilemma with people you care about.

RECOMMENDED YUA EPISODES
Transcending the Internet Hate Game with Dylan Marron
A Conversation with Facebook Whistleblower Frances Haugen
Behind the Curtain on The Social Dilemma with Jeff Orlowski-Yang and Larissa Rhodes

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
In the next two episodes of Your Undivided Attention, we take a close look at two industries, Big Food and social media, which represent dangerous "races to the bottom" and have big parallels with AI. And we are asking: what can our past mistakes and missed opportunities teach us about how we should approach AI harms? In this first episode, Tristan talks to Pulitzer Prize-winning journalist and author Michael Moss. His book Salt Sugar Fat: How the Food Giants Hooked Us rocked the processed food industry when it came out in 2013. Tristan and Michael discuss how we can leverage the lessons learned from Big Food's coordination failures, and whether it's the responsibility of the consumer, the government, or the companies to regulate.

RECOMMENDED MEDIA
Salt Sugar Fat: How the Food Giants Hooked Us
Michael's New York Times bestseller. You'll never look at a nutrition label the same way again.

Hooked: Food, Free Will, and How the Food Giants Exploit Our Addictions
Pulitzer Prize winner Michael Moss's exposé of how the processed food industry exploits our evolutionary instincts, the emotions we associate with food, and legal loopholes in their pursuit of profit over public health.

Control Your Tech Use
Center for Humane Technology's recently updated Take Control Toolkit.

RECOMMENDED YUA EPISODES
AI Myths and Misconceptions
The AI Dilemma
How Might a Long-Term Stock Market Transform Tech? (ZigZag episode)

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
What happens when creators consider what lifelong human development looks like in terms of the tools we make? And what philosophies from Sesame Street can inform how to steward the power of AI and social media to influence minds in thoughtful, humane directions?

When the first episode of Sesame Street aired on PBS in 1969, it was unlike anything that had been on television before: a collaboration between educators, child psychologists, comedy writers and puppeteers, all working together to do something that had never been done before, namely creating educational content for children on television. Fast-forward to the present: could we switch gears to reprogram today's digital tools to humanely educate the next generation? That's the question Tristan Harris and Aza Raskin explore with Dr. Rosemarie Truglio, the Senior Vice President of Curriculum and Content for Sesame Workshop, the non-profit behind Sesame Street.

RECOMMENDED MEDIA
Street Gang: How We Got to Sesame Street
This documentary offers a rare window into the early days of Sesame Street, revealing the creators, artists, writers and educators who together established one of the most influential and enduring children's programs in television history.

Sesame Street: Ready for School!: A Parent's Guide to Playful Learning for Children Ages 2 to 5 by Dr. Rosemarie Truglio
Rosemarie shares all the research-based, curriculum-directed school readiness skills that have made Sesame Street the preeminent children's TV program.

G Is for Growing: Thirty Years of Research on Children and Sesame Street, co-edited by Shalom Fisch and Rosemarie Truglio
This volume serves as a marker of the significant role that Sesame Street plays in the education and socialization of young children.

The Democratic Surround by Fred Turner
In this prequel to his celebrated book From Counterculture to Cyberculture, Turner rewrites the history of postwar America, showing how in the 1940s and 1950s American liberalism offered a far more radical social vision than we now remember.

Amusing Ourselves to Death by Neil Postman
Neil Postman's groundbreaking book about the damaging effects of television on our politics and public discourse has been hailed as a twenty-first-century book published in the twentieth century.

Sesame Workshop Identity Matters Study
Explore parents' and educators' perceptions of children's social identity development.

Effects of Sesame Street: A meta-analysis of children's learning in 15 countries
Commissioned by Sesame Workshop, the study was led by University of Wisconsin researchers Marie-Louise Mares and Zhongdang Pan.

U.S. Parents & Teachers See an Unkind World for Their Children, New Sesame Survey Shows
According to the survey titled "K is for Kind: A National Survey On Kindness and Kids," parents and teachers in the United States worry that their children are living in an unkind world.

RECOMMENDED YUA EPISODES
Are the Kids Alright? with Jonathan Haidt
The Three Rules of Humane Tech
When Media Was for You and Me with Fred Turner

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
You're likely familiar with the modern zombie trope: a zombie bites someone you care about, and they're transformed into a creature who wants your brain. Zombies are the perfect metaphor to explain something Tristan and Aza have been thinking about lately that they call zombie values.

In this Spotlight episode of Your Undivided Attention, we talk through some examples of how zombie values limit our thinking around tech harms. Our hope is that by the end of this episode, you'll be able to recognize the zombie values that walk amongst us, and think through how to upgrade these values to meet the realities of our modern world.

RECOMMENDED MEDIA
Is the First Amendment Obsolete?
This essay explores free expression challenges.

The Wisdom Gap
This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.

RECOMMENDED YUA EPISODES
A Problem Well-Stated is Half Solved with Daniel Schmachtenberger
How To Free Our Minds with Cult Deprogramming Expert Steve Hassan

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
There's really no one better than veteran tech journalist Kara Swisher at challenging people to articulate their thinking. Tristan Harris recently sat down with her for a wide-ranging interview on AI risk. She even pressed Tristan on whether he is a doomsday prepper. It was so great, we wanted to share it with you here. The interview was originally on Kara's podcast ON with Kara Swisher. If you like it and want to hear more of Kara's interviews with folks like Sam Altman, Reid Hoffman and others, you can find more episodes of ON with Kara Swisher here: https://link.chtbl.com/_XTWwg3k

RECOMMENDED YUA EPISODES
AI Myths and Misconceptions
The AI Dilemma
The Three Rules of Humane Tech

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Democracy in action has looked the same for generations. Constituents might go to a library or school every one or two years and cast their vote for people who don't actually represent everything that they care about. Our technology is rapidly increasing in sophistication, yet our forms of democracy have largely remained unchanged. What would an upgrade look like, not just for democracy, but for all the different places that democratic decision-making happens?

On this episode of Your Undivided Attention, we're joined by political economist and social technologist Divya Siddarth, one of the world's leading experts in collective intelligence. Together we explore how new kinds of governance can be supported through better technology, and how collective decision-making is key to unlocking everything from more effective elections to better ways of responding to global problems like climate change.

Correction: Tristan mentions Elon Musk's attempt to manufacture ventilators early on in the COVID-19 pandemic. Musk ended up buying over 1,200 ventilators that were delivered to California.

RECOMMENDED MEDIA
Against Democracy by Jason Brennan
A provocative challenge to one of our most cherished institutions.

Ledger of Harms
Technology platforms have created a race for human attention that's unleashed invisible harms to society. Here are some of the costs that aren't showing up on their balance sheets.

The Wisdom Gap
This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.

DemocracyNext
DemocracyNext is working to design and establish new institutions for government and transform the governance of organizations that influence public life.

CIP.org
An incubator for new governance models for transformative technology.

Ethelo
Transform community engagement through consensus.

Kazm's Living Room Conversations
Living Room Conversations works to heal society by connecting people across divides through guided conversations proven to build understanding and transform communities.

The Citizens Dialogue
A model for citizen participation in Ostbelgien, which was brought to life by the parliament of the German-speaking community.

Asamblea Ciudadana Para El Clima
Spain's national citizens' assembly on climate change.

Climate Assembly UK
The UK's national citizens' assembly on climate change.

Citizens' Convention for the Climate
France's national citizens' assembly on climate change.

Polis
Polis is a real-time system for gathering, analyzing and understanding what large groups of people think in their own words, enabled by advanced statistics and machine learning.

RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
They Don't Represent Us with Larry Lessig
A Renegade Solution to Extractive Economics with Kate Raworth

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
A few episodes back, we presented Tristan Harris and Aza Raskin's talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.

The talk resonated: over 1.6 million people have viewed it on YouTube as of this episode's release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI.

However, now that so many people have watched or listened to the talk, we've found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions.

RECOMMENDED MEDIA
Opinion | Yuval Harari, Tristan Harris, and Aza Raskin on Threats to Humanity Posed by AI - The New York Times
In this New York Times piece, Yuval Harari, Tristan Harris, and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents.

Misalignment, AI & Moloch
A deep dive into the game theory and exponential growth underlying our modern economic system, and how recent advancements in AI are poised to turn up the pressure on that system, and its wider environment, in ways we have never seen before.

RECOMMENDED YUA EPISODES
The AI Dilemma
The Three Rules of Humane Tech
Can We Govern AI? with Marietje Schaake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
Despite our serious concerns about the pace of deployment of generative artificial intelligence, we are not anti-AI. There are uses that can help us better understand ourselves and the world around us. Your Undivided Attention co-host Aza Raskin is also co-founder of Earth Species Project, a nonprofit dedicated to using AI to decode non-human communication. ESP is developing this technology both to shift the way that we relate to the rest of nature, and to accelerate conservation research.

Significant recent breakthroughs in machine learning have opened ways both to encode human languages and to map out patterns of animal communication. The research, while slow and incredibly complex, is very exciting. Picture being able to tell a whale to dive to avoid ship strikes, or to forge cooperation in conservation areas. These advances come with their own complex ethical issues. But understanding non-human languages could transform our relationship with the rest of nature and promote a duty of care for the natural world.

In a time of such deep division, it's comforting to know that hidden underlying languages may potentially unite us. When we study the patterns of the universe, we'll see that humanity isn't at the center of it.

Corrections:
Aza refers to the founding of Earth Species Project (ESP) in 2017. The organization was established in 2018.
When offering examples of self-awareness in animals, Aza mentions lemurs that get high on centipedes. They actually get high on millipedes.

RECOMMENDED MEDIA
Using AI to Listen to All of Earth's Species
An interactive panel discussion hosted at the World Economic Forum in San Francisco on October 25, 2022. Featuring ESP President and Cofounder Aza Raskin; Dr. Karen Bakker, Professor at UBC and Harvard Radcliffe Institute Fellow; and Dr. Ari Friedlaender, Professor at UC Santa Cruz.

What A Chatty Monkey May Tell Us About Learning to Talk
The gelada monkey makes a gurgling sound that scientists say is close to human speech.

Lemurs May Be Making Medicine Out of Millipedes
Red-fronted lemurs appear to use plants and other animals to treat their afflictions.

Fathom on AppleTV+
Two biologists set out on an undertaking as colossal as their subjects: deciphering the complex communication of whales.

Earth Species Project is Hiring a Director of Research
ESP is looking for a thought leader in artificial intelligence with a track record of managing a team of researchers.

RECOMMENDED YUA EPISODES
The Three Rules of Humane Tech
The AI Dilemma
Synthetic Humanity: AI & What's At Stake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
When it comes to AI, what kind of regulations might we need to address this rapidly developing new class of technologies? What makes regulating AI and runaway tech in general different from regulating airplanes, pharmaceuticals, or food? And how can we ensure that issues like national security don't become a justification for sacrificing civil rights?

Answers to these questions are playing out in real time. If we wait for more AI harms to emerge before proper regulations are put in place, it may be too late. Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. In spite of AI's complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products into the world without oversight.

Correction: Marietje said antitrust laws in the US were a century ahead of those in the EU. Competition law in the EU was enacted as part of the Treaty of Rome in 1957, almost 70 years after the US.

RECOMMENDED MEDIA
The AI Dilemma
Tristan Harris and Aza Raskin's presentation on existing AI capabilities and the catastrophic risks they pose to a functional society. Also available in podcast format (linked below).

The Wisdom Gap
This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.

The EU's Digital Services Act (DSA) & Digital Markets Act (DMA)
The two pieces of legislation aim to create safer and more open digital spaces for individuals and businesses alike.

RECOMMENDED YUA EPISODES
Digital Democracy is Within Reach with Audrey Tang
The AI Dilemma
The Three Rules of Humane Tech

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we're taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying that there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is actually 6.8 billion now.

RECOMMENDED MEDIA
We Think in 3D. Social Media Should, Too
Tristan Harris writes about a simple visual experiment that demonstrates the power of one's point of view.

Let's Think About Slowing Down AI
Katja Grace's piece about how to avert doom by not building the doom machine.

If We Don't Master AI, It Will Master Us
Yuval Harari, Tristan Harris and Aza Raskin call upon world leaders to respond to this moment at the level of challenge it presents in this New York Times opinion piece.

RECOMMENDED YUA EPISODES
The AI Dilemma
Synthetic Humanity: AI & What's At Stake

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
You may have heard about the arrival of GPT-4, OpenAI's latest large language model (LLM) release. GPT-4 surpasses its predecessor in terms of reliability, creativity, and ability to process intricate instructions. It can handle more nuanced prompts compared to previous releases, and is multimodal, meaning it was trained on both images and text. We don't yet understand its capabilities, yet it has already been deployed to the public.

At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing.

AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is: if our dystopia is bad enough, it won't matter how good the utopia we want to create is. We only get one shot, and we need to move at the speed of getting it right.

RECOMMENDED MEDIA
AI 'race to recklessness' could have dire consequences, tech experts warn in new interview
Tristan Harris and Aza Raskin sit down with Lester Holt to discuss the dangers of developing AI without regulation.

The Day After (1983)
This made-for-television movie explored the effects of a devastating nuclear holocaust on small-town residents of Kansas.

The Day After discussion panel
Moderated by journalist Ted Koppel, a panel of present and former US officials, scientists and writers discussed nuclear weapons policies live on television after the film aired.

Zia Cora - Submarines
"Submarines" is a collaboration between musician Zia Cora (Alice Liu) and Aza Raskin. The music video was created by Aza in less than 48 hours using AI technology and published in early 2022.

RECOMMENDED YUA EPISODES
Synthetic Humanity: AI & What's At Stake
A Conversation with Facebook Whistleblower Frances Haugen
Two Million Years in Two Hours: A Conversation with Yuval Noah Harari

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
A few months ago on Your Undivided Attention, we released a Spotlight episode on TikTok's national security risks. Since then, we've learned more about the dangers of the Chinese-owned company: we've seen evidence of TikTok spying on US journalists, and proof of hidden state media accounts used to influence US elections. We've seen Congress ban TikTok on most government-issued devices, and more than half of US states have done the same, along with dozens of US universities that are banning TikTok access from university wifi networks. More people in Western governments and media are saying that they used to believe TikTok was an overblown threat. As we've seen more evidence of national security risks play out, there's even talk of banning TikTok itself in certain countries. But is that the best solution? If we opt for a ban, how do we, as open societies, fight accusations of authoritarianism?

On this episode of Your Undivided Attention, we're going to do a deep dive into these questions with Marc Faddoul. He's the co-director of Tracking Exposed, a nonprofit investigating the influence of social media algorithms in our lives. His work has shown how TikTok tweaks its algorithm to maximize partisan engagement in specific national elections, and how it bans international news in countries like Russia that are fighting propaganda battles inside their own borders. In other words, we don't all get the same TikTok, because different geopolitical interests might guide which TikTok you see. That is a kind of soft power that TikTok operates on a global scale, and it doesn't get talked about often enough.

We hope this episode leaves you with a lot to think about in terms of what the risks of TikTok are, how it's operating geopolitically, and what we can do about it.

RECOMMENDED MEDIA
Tracking Exposed Special Report: TikTok Content Restriction in Russia
How has the Russian invasion of Ukraine affected the content that TikTok users see in Russia? [Part 1 of Tracking Exposed series]

Tracking Exposed Special Report: Content Restrictions on TikTok in Russia Following the Ukrainian War
How are TikTok's policy decisions affecting pro-war and anti-war content in Russia? [Part 2 of Tracking Exposed series]

Tracking Exposed Special Report: French Elections 2022
The visibility of French candidates on TikTok and YouTube search engines.

The Democratic Surround by Fred Turner
A dazzling cultural history that demonstrates how American intellectuals, artists, and designers from the 1930s-1960s imagined new kinds of collective events that were intended to promote a powerful experience of American democracy in action.

RECOMMENDED YUA EPISODES
When Media Was for You and Me with Fred Turner
Addressing the TikTok Threat
A Fresh Take on Tech in China with Rui Ma and Duncan Clark

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
It may seem like the rise of artificial intelligence, and the increasingly powerful large language models you may have heard of, is moving really fast… and it IS. But what's coming next is when we enter synthetic relationships with AI that could come to feel just as real and important as our human relationships... and perhaps even more so.

In this episode of Your Undivided Attention, Tristan and Aza reach beyond the moment to talk about this powerful new AI, and the new paradigm of humanity and computation we're about to enter. This is a structural revolution that affects way more than text, art, or even Google search. There are huge benefits to humanity, and we'll discuss some of those. But we also see that as companies race to develop the best synthetic relationships, we are setting ourselves up for a new generation of harms made exponentially worse by AI's power to predict, mimic and persuade.

It's obvious we need ways to steward these tools ethically. So Tristan and Aza also share their ideas for creating a framework for AIs that will help humans become MORE humane, not less.

RECOMMENDED MEDIA
Cybernetics: or, Control and Communication in the Animal and the Machine by Norbert Wiener
A classic and influential work that laid the theoretical foundations for information theory.

New Chatbots Could Change the World. Can You Trust Them?
The New York Times addresses misinformation and how Siri, Google Search, online marketing and your child's homework will never be the same.

Out of One, Many: Using Language Models to Simulate Human Samples by Lisa P. Argyle, Ethan C. Busby, Nancy Fulda, Joshua Gubler, Christopher Rytting, and David Wingate
This paper proposes and explores the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research.

Earth Species Project
Earth Species Project, co-founded by Aza Raskin, is a non-profit dedicated to using artificial intelligence to decode non-human communication.

Her (2013)
A science-fiction romantic drama film written, directed, and co-produced by Spike Jonze.

What A Chatty Monkey May Tell Us About Learning To Talk
NPR explores the fascinating world of gelada monkeys and the way they communicate.

RECOMMENDED YUA EPISODES
How Political Language is Engineered with Drew Westen & Frank Luntz
What is Humane Technology?
Down the Rabbit Hole by Design with Guillaume Chaslot
It's easy to tell ourselves we're living in the world we want: one where Darwinian evolution drives competing technology platforms and capitalism pushes nations to maximize GDP regardless of externalities like carbon emissions. It can feel like evolution and competition are all there is.

If that's a complete description of what's driving the world and our collective destiny, that can feel pretty hopeless. But what if that's not the whole story of evolution? This is where evolutionary theorist, author, and professor David Sloan Wilson comes in. He has documented where an enlightened game, one of cooperation rather than competition, is possible. His work shows that humans can and have chosen values like cooperation, altruism and group success, versus individual competition and selfishness, at key moments in our evolution, proving that evolution isn't just genetic. It's cultural, and it's a choice.

In a world where our trajectory isn't tracking in the direction we want, it's time to slow down and ask: is a different kind of conscious evolution possible? On Your Undivided Attention, we're going to update the Darwinian principles of evolution using a critical scientific lens that can help upgrade our ability to cooperate, ranging from the small community level all the way to entire technology companies that can cooperate in ways that allow everyone to succeed.

RECOMMENDED MEDIA
This View of Life: Completing the Darwinian Revolution by David Sloan Wilson
Prosocial: Using Evolutionary Science to Build Productive, Equitable, and Collaborative Groups by David Sloan Wilson
Atlas Hugged: The Autobiography of John Galt III by David Sloan Wilson
Governing the Commons: The Evolution of Institutions for Collective Action by Elinor Ostrom
Hit Refresh by Satya Nadella
WTF? What's the Future and Why It's Up to Us by Tim O'Reilly
Hard Drive: Bill Gates and the Making of the Microsoft Empire by James Wallace & Jim Erickson

RECOMMENDED YUA EPISODES
An Alternative to Silicon Valley Unicorns with Mara Zepeda & Kate "Sassy" Sassoon
A Problem Well-Stated is Half-Solved with Daniel Schmachtenberger

Your Undivided Attention is produced by the Center for Humane Technology. Follow us on Twitter: @HumaneTech_
I hope it isn't lost on people that it is a critical organ for exchanging ideas and informing dissidents in Iran and other authoritarian-controlled countries. That is of value to the Mullahs, Putin, etc.
We're supposed to be a democratic republic, but whatever... we're not that either anymore.
You are so disconnected from reality, it's hard to listen to this patronizing tone :/
I'm surprised by Frank Luntz feeling he isn't listened to. It seems that every time there's been some thorny issue or messaging battle in the last couple of decades, he's been there, understanding the right groups to win over. I hope the messaging debacle, though I understand it, has taught or chastened the Dems enough to employ the strategies of gurus like Drew Westen and Luntz!!
I hate to crush commenters' dreams, but neither Tristan Harris nor Aza Raskin reads comments from Castbox!!!
I just started listening to this podcast and it seems really interesting! I just have one comment about this one on gambling addiction, since I kept waiting for them to talk about the root of gambling or any other kind of addiction... this is central to solving this problem, and any psychologist working in the area knows about this, so I was somewhat surprised there was no mention of it. Why do people start gambling in the first place (or other behaviours that end up in addiction)? And I am not talking about playing slots once a year on your bday or for a bachelor's party... Once people are addicted, it is extremely difficult to stop (once an addict, always an addict!), but prevention is much easier to manage and implement. There are some genetic/hereditary propensities for addiction given the right conditions, but this is not always predictive. The clearest predictor of someone becoming an addict is linked to the quality of social and emotional relationships in one's life. And my gu
I think you dropped the ball on this one, guys. I couldn't think of one thing McCaster said that China or Russia does that we do not do ourselves abroad, or here at home in America. Just because we're America doesn't make our intent for nefarious things like media control, in our own country and others, any better than China's.
This is Wirt's lost tape from Over the Garden Wall.
Amazing episode, so insightful. This kind of conversation should be had on national news.
I am extremely impressed with this podcast. Its presentation was cogent and very well informed. Thank you! What's the plan for having government adopt blockchain as a means to transparency?
What did you think of this?
She habitually drags out the final word or syllable of each clause, as though she thinks it accentuates her point. Don't inflect EVERYTHING.
This podcast changed my life. I've felt 'wrong' about social media for some time, and since disconnecting have found myself justifying 'why not' to my family and friends, and finding my 'why so?' to be wholly ineffective. Even to myself, it was hard to educate and explain internally. I can now explain myself more clearly. I won't change my family's mind, but I am now more informed (on both sides) and can make more considered decisions. I've shared this podcast with some colleagues and friends who are more open-minded, and already I see a change, and that's what matters. It's about awareness. I don't want to proselytise. Thank you for the passion, accessibility and transparency of a podcast like this. I truly hope we will look back on podcasts like this decades from now and see them as prophetic. I hope... The alternative doesn't bear thinking about.
The snaps get old.
@18:32: "True for them" is such an intellectually broken phrase that it contributes to the very problem being discussed. The violence of Jan. 6th was fueled by lies conveyed through a misappropriation of English. Muddled language has a reciprocal relationship with muddled thinking. How can we have accountability when words no longer have meaning? This is Trump's own defense, and that of Sidney Powell, and Rudy Giuliani, and Fox, and every depraved Republican attempting to hide their bigotry and malice in a fog of nonsense. Stop contributing to the problem. Start using words as if they have actual meanings.
Another well intentioned person holding forth about "truth" because it seems right to her, yet many of her strung-together conjectures are factually wrong. It reminds me of anti-scientific Socratic precepts. So little of what she said is empirically falsifiable, and many of her little factoids are in fact false. It undermines her credibility, and therefore her efficacy in promoting what might be useful approaches.
@8:58: A great point I'm very glad to hear someone make regarding the over-use of military terminology and metaphors.
"Clock rate" is an ambiguous, incomplete term Aza seems compelled to say. Use a better term. Neologisms don't make you sound smart, just pseudo-intellectual.
I could do without Aza's breathy, overly empathic, constant "Yeah."
This format for introducing the guest seems awkward and contrived... and a bit effusive or inflated. Also, Aza should pull the mic out of his mouth.