
Moderated Content

Author: Evelyn Douek


Description

Moderated Content from Stanford Law School is podcast content about content moderation, moderated by assistant professor Evelyn Douek. The community standards of this podcast prohibit anything except the wonkiest conversations about the regulation—both public and private—of what you see, hear and do online.
73 Episodes
Stanford’s Evelyn Douek and Alex Stamos are joined by University of Washington professor Kate Starbird to discuss research on election rumors.

Kate Starbird is an associate professor at the University of Washington in the Department of Human Centered Design & Engineering, where she is also a co-founder of the Center for an Informed Public. - University of Washington
House Judiciary Committee Kate Starbird interview transcript
House Judiciary Committee Alex Stamos interview transcript

Sports Corner
Noted American sports expert Evelyn Douek discusses the NCAA women’s basketball championship in this slam dunk segment. Dawn Staley’s South Carolina Gamecocks defeated superstar Caitlin Clark’s Iowa Hawkeyes 87-75 on Sunday in what is expected to be the most watched women’s basketball game of all time, with an average ticket price hovering around $500. - Jill Martin/ CNN, Alexa Philippou/ ESPN

Join the conversation and connect with Evelyn and Alex on your favorite social media platform that doesn’t start with “X.”

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
SHOW NOTES
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

X this week had its lawsuit against the Center for Countering Digital Hate thrown out by a California district court. It’s a good and important win for free speech. - Emma Roth / The Verge

A Kremlin-linked group was spreading divisive stories about Kate Middleton as online rumors swirled about her whereabouts. Why? - Mark Landler and Adam Satariano / The New York Times

In the aftermath of the collapse of Baltimore's Francis Scott Key Bridge, the destruction of X as a platform for useful information about breaking news was all too clear. - A.W. Ohlheiser / Vox

Meta is shutting down its transparency tool, CrowdTangle. Brandon Silverman joins to talk about the tool and what this means for the future of platform transparency. - Vittoria Elliott / Wired
Brandon’s substack is Some Good Trouble
A group of civil society organizations and researchers wrote an open letter objecting to Meta’s decision - Mozilla
GW’s tracker of Platform Transparency Tools & The Brussels Effect

Join the conversation and connect with Evelyn and Alex on your favorite social media platform that doesn’t start with “X.”

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek is joined by Professor Genevieve Lakier of the University of Chicago Law School to discuss the Supreme Court oral arguments in Murthy v. Missouri. For one of their previous conversations on this topic, listen to this episode from September last year talking about the 5th Circuit’s decision in the case.

They also discuss Stanford’s amicus brief in the case, and the Stanford Internet Observatory’s blog post summarizing factual errors that have pervaded the case.

Join the conversation and connect with Evelyn and Alex on your favorite social media platform that doesn’t start with “X.”

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex and Evelyn discuss the latest bill to ban TikTok and its many flaws; the Gemini image-generation public relations crisis; Apple's fight-picking in Europe; and Texas and Florida's latest great attempts to regulate online speech.
Alex and Evelyn are joined by Moderated Content's Supreme Court correspondent Daphne Keller to talk about the oral argument in the NetChoice cases this week and what the Supreme Court justices seem to be thinking about whether and how states can regulate internet platforms.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Is the deepfake apocalypse finally here? Alex and Evelyn discuss the recent robocalls impersonating President Biden ahead of the New Hampshire primary and sexually explicit fake images of Taylor Swift that spread on X, resulting in the platform blocking searches for one of the most famous people in the world.

Let’s Get Meta
Meta will start labeling AI-generated images on Facebook, Instagram and Threads. The company is working with other technology and media companies to develop standards for identifying and labeling AI-generated content, but will that be effective?
In other democracy-saving announcements by Meta, Threads will not “proactively recommend political content from accounts you don't follow.” Good thing they disclose what political content means… oh wait.
Also in full transparency, Meta removed the Facebook and Instagram accounts of Iran’s Supreme Leader, Ayatollah Ali Khamenei, with little explanation of the decision, which comes months after the October 7 attack by Hamas on Israel.

X/Twitter Corner
Meanwhile, X is selling checkmarks to terrorists and failing to remove Chinese influence operations.

In Full Transparency
TikTok is restricting searches in its Creative Center tool, used to track hashtag trends and popularity. The change comes after the tool was used to scrutinize content related to the Israel-Hamas war. The data was never that great, but this is a loss for everyone.
Don’t worry, the Digital Services Act comes into full force this weekend with transparency requirements, and it’s definitely fully sorted out without legal challenges and with EU country regulators ready to enforce.

Legal Corner
A federal judge blocked an Ohio parental consent law from going into effect shortly after technology trade association NetChoice filed a challenge.
The Kids Online Safety Act was updated and now has a filibuster-proof majority of 62 co-sponsors. The bill could pass the Senate this year, but still faces long odds in the House, where there is dysfunction and no companion legislation. Less legislation gets passed in an election year, and opponents say the updates amount only to a new coat of paint on the same structural issues, in potential violation of the First Amendment.

Join the conversation and connect with Evelyn and Alex on your favorite social media platform that doesn’t start with “X.”

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos talk to Riana Pfefferkorn and David Thiel of the Stanford Internet Observatory about the technical and legal challenges of addressing computer-generated child sexual abuse material.

They mention:
Riana’s new paper on the topic, “Addressing Computer-Generated Child Sex Abuse Imagery: Legal Framework and Policy Implications” - Riana Pfefferkorn / Lawfare
David’s report documenting Child Sexual Abuse Material in a major dataset used to train AI models - David Thiel / SIO; Samantha Cole / 404 Media

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos talk about the Senate Judiciary Committee hearing with Tech CEOs about “Big Tech and the Online Child Sexual Exploitation Crisis.”

They mention:
The Stanford Internet Observatory’s work on Self-Generated CSAM - David Thiel, Renée DiResta and Alex Stamos / SIO
The REPORT Act - Riana Pfefferkorn / Tech Policy Press

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos are joined by Casey Newton of Platformer and Hard Fork to talk about his decision to move his newsletter off of Substack.

Casey explains his decision here: Why Platformer is leaving Substack
And talks about it on his podcast here: Why Casey Left Substack

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Stanford Internet Observatory’s David Thiel wrote a report documenting Child Sexual Abuse Material in a major dataset used to train AI models - David Thiel / SIO; Samantha Cole / 404 Media

Lots of DSA news from the EU:
Three new platforms have been designated Very Large Online Platforms – how did the adult sites get overlooked before? Woops! - Jon Porter / The Verge
The Commission has announced a formal investigation into X - Martin Husovec / DSA Newsletter
Researchers have reason to doubt the information platforms are submitting to the DSA Transparency Database - Amaury Trujillo, Tiziano Fagni, Stefano Cresci / arXiv

Content moderation controversies around the Israel/Gaza conflict continue.
The Meta Oversight Board released its “expedited” decisions on the topic - Oversight Board
Human Rights Watch released a report alleging suppression of pro-Palestinian content by the company - Human Rights Watch

Substack has a Nazi Problem - Ken White / The Popehat Report

The NetChoice Restatement of the Law continues to expand, with the trade group bringing a challenge to the Utah Social Media Law - Hannah Schoenbaum / AP

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex and Evelyn discuss US military information operations, Threads testing ActivityPub integration, ridiculous statistics about TikTok, YouTube Magic Dust, the Meta Oversight Board moving with all deliberate speed, and First Amendment retaliation claims.
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Elon Musk told advertisers to go f*** themselves in an interview with Jona–... sorry, Andrew Ross Sorkin of the NYT. Is this a good business strategy? - Kate Conger and Remy Tumin / The New York Times
Linda doing clean-up on Aisle Elon - Linda Yaccarino / X

Meta is still algorithmically promoting child sexual abuse material on its platforms. - Jeff Horwitz and Katherine Blunt / WSJ
They say they’re still working on it: Meta
On the flip side, Google’s risk-averse approach to CSAM and its poor customer service creates a different problem for people who suddenly find themselves locked out of their entire accounts. - Kashmir Hill / New York Times

Meta says it is adopting the same approach as in the past for the 2024 election season. - Nick Clegg / Meta
Except this time, the government apparently will not be giving them any tip-offs about foreign interference. Such communication has been stalled since July. - Naomi Nix and Cat Zakrzewski / The Washington Post
As Meta detailed in its quarterly adversarial threat report, though, this is not because such interference has stopped. - Meta

A district court issued a preliminary injunction preventing Montana’s state-wide TikTok ban from going into effect in the new year. - Sapna Maheshwari / New York Times; US District Court

Doritos has had the most important AI breakthrough of the year, with its crunch-cancellation software for gamers who like to snack. - Sydney Page / The Washington Post

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

In one of the most surprising (and rapidly developing) tech stories of the year, Sam Altman was ousted as CEO of OpenAI. The reasons are still unclear, and the story was still changing as we were recording. But at least partially the story is about AI safety, and what it means to pursue responsible development of AI - Karen Hao and Charlie Warzel / The Atlantic

Meta is advocating for online safety legislation that requires parental approval for children under 16 to download apps, shifting the burden to app stores for age verification and parental controls. - Sarah Perez/ TechCrunch, Cristiano Lima, Naomi Nix/ The Washington Post, Antigone Davis/ Meta
Meta announced it is opening up its Content Library and API more broadly - Nick Clegg / Meta

Everything is content moderation, and India is the most important jurisdiction for the future of online free speech, streaming platform edition, with Netflix and Amazon Prime self-censoring the content they serve in the country - Gerry Shih and Anant Gupta / The Washington Post

Osama bin Laden’s Letter to America on TikTok didn’t seem to go viral until the media drew attention to it. Would be nice to know for sure though! - Drew Harwell and Victoria Bisset / The Washington Post, Scott Nover / Slate

Musk launches a ridiculous lawsuit against Media Matters for reporting that Musk doesn’t like but admits is true. That’s not surprising at this point. But more surprising, and scary, is the state AGs who are willing to go along with it and have announced their own investigations. - Adi Robertson / The Verge

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Alex participated in the fifth Senate AI Insight Forum, focused on AI and its impact on elections and democracy. It turns out politicians can be reasonable and bipartisan when the cameras are off. - Oma Seddiq/ Bloomberg Law, Gabby Miller/ Tech Policy Press, Cristiano Lima/ The Washington Post, Christopher Hutton/ Washington Examiner, Office of Majority Leader Chuck Schumer

Label Your AI
Meta will require political advertisers to disclose if content has been digitally altered in potentially misleading ways. - Aisha Counts/ Bloomberg News, Katie Paul/ Reuters, Will Henshall/ Time, Facebook
Meta will also let political ads on Facebook and Instagram question the legitimacy of the 2020 U.S. presidential election. - Salvador Rodriguez/ The Wall Street Journal
Microsoft announced a free tool for politicians and campaigns to authenticate media with watermark credentials. - Margi Murphy/ Bloomberg News, Brad Smith/ Microsoft
YouTube will require creators to disclose realistic AI-generated content with new labels. Users can also request to remove manipulated video “that simulates an identifiable individual, including their face or voice.” - Olafimihan Oshin/ The Hill, Jennifer Flannery O'Connor, Emily Moxley/ YouTube

TikTok Tick Tock
There’s been a burst of new calls to ban TikTok over allegations that it is boosting anti-Israel and pro-Hamas content. - Alexander Bolton/ The Hill, Cecilia Kang, Sapna Maheshwari/ The New York Times
TikTok denies these allegations and faults inaccurate news reporting. - TikTok
Verified transparency about this would be good, but there’s no real evidence for the claim. There may be a conflation of “pro-Palestinian” and “pro-Hamas” content. Many people have pro-Palestinian views, especially TikTok’s young userbase. It also turns out that other platforms have similarly prevalent content. - Drew Harwell/ The Washington Post
The renewed calls for TikTok to be banned because of content on it that lawmakers don’t like give the lie to the argument that calls for a ban are not about speech, which is... a First Amendment problem.
Nepal, however, doesn’t have a First Amendment, so it banned TikTok citing disruption to “social harmony” including “family structures” and “social relations” - Niha Masih, Sangam Prasai/ The Washington Post

A Trip to India
Nothing massively new here, but worth highlighting this WaPo report: “For years, a committee of executives from U.S. technology companies and Indian officials convened every two weeks in a government office to negotiate what could — and could not — be said on Twitter, Facebook and YouTube.” - Karishma Mehrotra, Joseph Menn/ The Washington Post
Meanwhile, Apple has been notifying opposition politicians in India that they are “being targeted by state-sponsored attackers.” - Meryl Sebastian/ BBC News

Transparency Please
The first batch of DSA transparency reports have been submitted and Tech Policy Press is tracking. - Gabby Miller/ Tech Policy Press
The unsurprising news is that X is devoting far fewer resources to content moderation than its peers. Shocker! - Foo Yun Chee, Supantha Mukherjee/ Reuters
“X's 2,294 EU content moderators compared with 16,974 at Google's YouTube, 7,319 at Google Play and 6,125 at TikTok.”

Legal Corner
The Supreme Court struggled with two cases about when public officials can block critics online. Much of the debate came down to whether there is a difference between personal and official social media accounts. - Josh Gerstein/ Politico Pro, John Kruzel, Andrew Chung/ Reuters, Ian Millhiser/ Vox, Ann E. Marimow/ The Washington Post
Overall, the Court sounded sympathetic to the claim that officials shouldn’t be able to block people whenever they please, but much less clear on what the test should be.

Sports Corner
Is there a Big Game in California this weekend? Alex has a lot to say for someone rooting for the team with a losing record in the 126-year series.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

President Joe Biden signed an Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence today. The sweeping EO includes standards setting for generative AI watermarking and red teaming. It will also set rules to mitigate privacy and bias risks before AI systems can be used by federal officials. - Maria Curi, Ashley Gold/ Axios, Mohar Chatterjee, Rebecca Kern/ Politico, Mohar Chatterjee/ Politico, John D. McKinnon, Sabrina Siddiqui, Dustin Volz/ The Wall Street Journal, Cat Zakrzewski, Cristiano Lima/ The Washington Post
The EO is a good step forward, but the measures are limited in power without congressional action.

App store rules are restricting access to some Hamas-affiliated channels on Telegram, where content moderation action is rare, allowing terrorist organizations to share messaging. The restrictions are inconsistent, with some channels only blocked in the Google Play store app in some cases. - Clare Duffy, Brian Fung/ CNN, Kevin Collier/ NBC News, Wes Davis/ The Verge
It’s another reminder of the power of content moderation rules in the stack — at the infrastructure or distributor level, like app stores.

X-Twitter Corner
It’s been one year since Elon Musk flipped the bird (and struggled to carry a sink into Twitter’s San Francisco headquarters). Our original episode on this, “Musk Flips the Bird,” held up pretty well — especially the prediction that this would be very good news for Mark Zuckerberg.

Legal Corner
It’s not all good news for Zuck though. The state attorneys general of 41 states and D.C. sued Meta, alleging Instagram and Facebook harm kids with addictive features and privacy violations. - Barbara Ortutay/ Associated Press, Lauren Feiner/ CNBC, Rebecca Kern/ Politico, Cecilia Kang, Natasha Singer/ The New York Times, Cristiano Lima, Naomi Nix/ The Washington Post, Daphne Keller/ @daphnehk
This is a relatively novel legal argument, and it appears to be an uphill battle to sue over design harms rather than content. Still, the alleged privacy violations could hold up, and the political posturing alone may prove to be a winner in the multi-pronged legal, policy, and regulatory battle.
The king got involved and we can’t ignore the UK Online Safety Act anymore. The legislation received royal assent, becoming law last week. - Imran Rahman-Jones, Chris Vallance/ BBC News, Jon Porter/ The Verge, Peter Guest/ Wired

Alex and Stanford Internet Observatory graduate researcher Sara Shah published a guide on trust and safety issues in the Fediverse, with tips for running a Mastodon instance.

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Stanford’s Evelyn Douek and Alex Stamos weigh in on the latest online trust and safety news and developments:

Marc Andreessen, the co-founder of venture capital firm Andreessen Horowitz and the Netscape web browser, wrote a lengthy blog post with an ode to technology. He also manages to declare trust and safety “the enemy” in the rambling screed of more than 5,000 words. - Dan Primack/ Axios, Marc Andreessen/ Andreessen Horowitz
Have you “properly glorified” technology today?

Moderating the War
Meta got a headline you never want in 404 Media: “Instagram ‘Sincerely Apologizes’ For Inserting ‘Terrorist’ Into Palestinian Bio Translations.” - Samantha Cole/ 404 Media
But don’t worry, Meta said it is “sorry” for “inappropriate Arabic translations.” - Liv McMahon, Joe Tidy/ BBC News
The Wall Street Journal had a big story on the tensions and challenges within Meta over moderation of speech in Palestinian territories. - Sam Schechner, Jeff Horwitz, Newley Purnell/ The Wall Street Journal
The jawboning continues: The European Commission issued formal requests for information to Meta and TikTok about how the social media companies are removing illegal content and curbing disinformation during the Israel-Hamas war to comply with the Digital Services Act. - Kelvin Chan/ Associated Press, Clothilde Goujard/ Politico, Charlotte Van Campenhout, Bart H. Meijer/ Reuters, Natasha Lomas/ TechCrunch, Emma Roth/ The Verge
Dozens of civil society organizations sent a letter to European Commissioner Thierry Breton alleging a misunderstanding of key components of the Digital Services Act (DSA) in letters sent to major social media companies about how they are handling information related to the Israel-Hamas war. - Clothilde Goujard/ Politico Pro

Legal Corner
Speaking of jawboning, the Supreme Court will hear a jawboning case out of the Fifth Circuit, which ruled that a broad swath of the Biden administration violated the First Amendment in their engagement with social media platforms. - Lawrence Hurley/ NBC News, Julia Shapero/ The Hill, Adam Liptak/ The New York Times, Supreme Court (.pdf)
Go deeper with our previous discussions on this case with University of Chicago Law professor Genevieve Lakier:
“The 5th Circuit's Jawboning Ruling”
“Government, Platform Communication, Jawboning, and the First Amendment”

Threads is still working out what it wants to be and is suppressing search terms related to controversial news topics. - Sarah Perez/ TechCrunch

Join the conversation and connect with Evelyn and Alex on Twitter at @evelyndouek and @alexstamos.

Moderated Content is produced in partnership by Stanford Law School and the Cyber Policy Center. Special thanks to John Perrino for research and editorial assistance.

Like what you heard? Don’t forget to subscribe and share the podcast with friends!
Alex and Evelyn talk to Brian Fishman, the former Policy Director for counterterrorism and dangerous organizations at Facebook/Meta, about the history of terrorism online, the challenges for platforms moderating terrorism, and the bad incentives created by misguided political pressure (looking at you, EU).
Alex and Evelyn discuss how the horrific events in Israel over the weekend make clear how important social media is during fast-moving historical events, and how X/Twitter has fundamentally degraded as a source of information. They also discuss China's ramped up crack down on app stores, and the Supreme Court's cert grant in the Netchoice cases, that could reshape the internet.
MC LIVE 9/28

2023-10-02 · 01:00:26

Alex and Evelyn record an episode in front of probably their entire active listener base. They talk about an update on SIO's investigations into child sexual abuse material on platforms; the fight for free speech in India; the poor outlook for election integrity at X in 2024, and what this might mean for other platforms; platform transparency mandates with Daphne Keller; and challenges to age verification laws with Alison Boden, the Executive Director of the Free Speech Coalition.
Alex and Evelyn discuss reporting on a proposed deal between TikTok and the US government for it to continue to operate in the country, and the broader geopolitical context of US-China relations; how to think about search-term blocking; YouTube preventing Russell Brand from monetizing his videos on its platform; the Musk stories from the week that matter; and the enjoining of the California Age Appropriate Design Code by a California judge.