Google Somehow Got Even More Shit

Update: 2024-05-29

Digest

This episode of the 404 Media Podcast discusses the recent decline of Google Search and the rollout of its AI-powered features, focusing on the problem of AI-generated misinformation. The hosts examine how Google's AI Overviews often provide inaccurate and even harmful answers, citing examples like suggesting users add glue to their pizza sauce. They also delve into Google's financial relationship with Reddit, in which Google reportedly pays $60 million annually for access to Reddit data to train its AI models, a deal that has led to low-quality content surfacing in search results. The episode explores whether users could intentionally poison that data in response, and discusses a paper by Google researchers finding that AI image generators are now the primary source of image-based disinformation online. It concludes with a discussion of a post-Google web, where users may need to turn to alternative search engines as Google leans further into AI.

Outlines

00:00:00
Introduction

This Chapter introduces the 404 Media Podcast and its hosts, Joseph Cox, Sam Cole, Emanuel Maiberg, and Jason Koebler. They mention the YouTube version of the show and the topic of the episode: Google's decline and the introduction of AI-powered features.

00:01:32
Google's AI Misinformation

This Chapter delves into Google's new AI-powered search results, which often provide inaccurate and even harmful information. The hosts discuss examples like Google suggesting users add glue to their pizza sauce to prevent cheese from sliding off. They also explore the issue of Google scraping Reddit data for its AI models, leading to the surfacing of low-quality content.

00:34:23
Google Researchers on AI Disinformation

This Chapter discusses a paper by Google researchers that found AI image generators are now the primary source of image-based disinformation online. The hosts discuss the implications of this finding and the potential for AI to exacerbate the spread of misinformation.

00:42:10
The Future of Search

This Chapter explores the potential for a post-Google web, where users may need to find alternative search engines due to Google's reliance on AI. The hosts discuss the challenges of finding reliable information online and the need for users to be critical of AI-generated content.

Keywords

Google


Google is an American multinational technology company that specializes in Internet-related services and products, which include online advertising technologies, a search engine, cloud computing, software, and hardware. It is one of the Big Five companies in the American technology sector, alongside Amazon, Apple, Meta (formerly Facebook), and Microsoft.

AI


AI stands for Artificial Intelligence, which is the simulation of human intelligence processes by computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.

Misinformation


Misinformation refers to false or inaccurate information that is spread unintentionally. It can be spread through various channels, including social media, news outlets, and personal communication. Misinformation can have harmful consequences, such as influencing public opinion, undermining trust in institutions, and leading to harmful actions.

Reddit


Reddit is an American social news aggregation, web content rating, and discussion website. Registered users submit content to the site such as links, text posts, images, and videos, which are then voted up or down by other users. Content that receives a sufficient number of upvotes appears on the site's front page.

DeepMind


DeepMind is a British artificial intelligence company founded in 2010 and acquired by Google in 2014. DeepMind is known for its work on artificial general intelligence, particularly its development of AlphaGo, a computer program that defeated world champion Go player Lee Sedol in 2016.

Generative AI


Generative AI refers to a type of artificial intelligence that can create new content, such as text, images, audio, and video. Generative AI models are trained on large datasets of existing content and can then generate new content that is similar to the training data. Examples of generative AI models include ChatGPT, DALL-E 2, and Stable Diffusion.

Disinformation


Disinformation refers to false or inaccurate information that is spread intentionally. It is often used to manipulate public opinion, undermine trust in institutions, or sow discord. Disinformation can be spread through various channels, including social media, news outlets, and personal communication.

Image Generator


An image generator is a type of artificial intelligence that can create new images from text descriptions. Image generators are trained on large datasets of images and text descriptions and can then generate new images that are similar to the training data. Examples of image generators include DALL-E 2, Stable Diffusion, and Midjourney.

Search Engine


A search engine is a software system that is designed to search for information on the World Wide Web. Search engines use algorithms to crawl the web, index web pages, and return results that match a user's search query. Examples of search engines include Google, Bing, and DuckDuckGo.

Q&A

  • What are some examples of inaccurate information provided by Google's AI overviews?

    One example is when a user searched for "cheese not sticking to pizza" and the AI overview suggested adding glue to the sauce. Another example is when the AI overview suggested smoking two to three cigarettes per day during pregnancy.

  • How is Google using Reddit data for its AI models?

    Google reportedly pays Reddit $60 million annually for access to Reddit data, which it uses to train its large language models. This means Google is relying on Reddit content to improve its AI's ability to generate text and answer questions.

  • What is the potential for users to poison data to combat misinformation?

    Some users are considering intentionally polluting the data by posting misleading or inaccurate information on platforms like Reddit and Tumblr. This is an attempt to corrupt the training data that AI models learn from, degrading their ability to generate accurate answers.

  • What did Google researchers find about AI image generators?

    Google researchers found that AI image generators are now the primary source of image-based disinformation online. This means that most of the fake or misleading images being spread online are now being created by AI.

  • What are some potential challenges of a post-Google web?

    One challenge is finding reliable information online, as AI-generated content can be difficult to distinguish from human-generated content. Another challenge is the need for users to be critical of AI-generated content and to verify information from multiple sources.

  • What is Google's response to the criticism of its AI-powered search results?

    Google has stated that the AI overviews are designed to provide quick and easy answers to user queries. However, they have also acknowledged that the AI is still under development and that there are some issues with accuracy and reliability.

  • What is the potential impact of AI on the spread of misinformation?

    AI has the potential to both exacerbate and mitigate the spread of misinformation. On the one hand, AI can be used to generate and spread false information at scale. On the other hand, AI can also be used to detect and combat misinformation.

  • What are some potential solutions to the problem of AI-generated misinformation?

    Some potential solutions include developing better AI models that are less prone to generating misinformation, creating tools to help users identify AI-generated content, and promoting media literacy to help users critically evaluate information.

  • What is the future of search?

    The future of search is uncertain, but it is likely that AI will play an increasingly important role. It is possible that users will need to find alternative search engines or develop new strategies for finding reliable information online.

  • What is the role of human oversight in the development and deployment of AI?

    Human oversight is essential for ensuring that AI is developed and deployed responsibly. This includes ensuring that AI models are trained on accurate data, that AI systems are designed to be fair and unbiased, and that AI is used for ethical purposes.

Show Notes

We revisit our Why Google Is Shit Now episode with something of an update: Google got worse! Of course we're talking about the embarrassing rollout of Google's AI-powered answers. Jason explains the supply chain of data, and especially how it comes from Reddit posts. After the break, Emanuel continues the Google theme with a study written by Google researchers which shows that AI is now a leading disinformation vector. In the subscribers-only section, we talk about Sam and Emanuel's piece on AI-generated CSAM not being a victimless crime.

Subscribe at 404media.co for bonus content.

Learn more about your ad choices. Visit megaphone.fm/adchoices

404 Media