Rosanna Pansino Battles AI Slop to Reclaim the Internet – CNET

by Rachel Kim – Technology Editor

Rosanna Pansino, a baker known for her elaborate creations including a Star Wars Death Star cake and holographic chocolate bars, is now battling a new ingredient flooding the internet: AI-generated content. Pansino has launched a series dedicated to recreating popular AI-generated food videos in real life, a response to what she calls “AI slop” crowding out authentic creators on social media.

For years, Pansino’s feeds were filled with posts from fellow bakers and friends. Now, she says, they’re being overtaken by AI-produced clips, including a trend of objects being spread “satisfyingly” on toast. “The internet is flooded with AI slop and I wanted to find a way to fight back against it in a fun way,” Pansino told CNET.

Her approach involves meticulously recreating AI-generated visuals with real-world skill. One example involved a video of sour gummy Peach Rings being smeared on toast. While the AI version appeared simple, Pansino’s recreation required a complex process. She created peach-flavored butter rings infused with oil and colored to match the candies, then froze and assembled them before coating them in a sugar-and-citric-acid mixture to replicate the sour dusting. The result, she says, was a near-perfect replica, demonstrating the labor involved in genuine creation.

The proliferation of “AI slop” – low-quality, often nonsensical content generated by artificial intelligence – is a growing concern online. A recent CNET study found that 94% of U.S. adults who use social media believe they encounter AI-generated content while scrolling, but only 11% find it entertaining, useful, or informative. The ease and low cost of AI content creation, powered by tools like OpenAI’s Sora, Google’s Nano Banana, and Meta AI, are driving the surge.

Experts have voiced concerns about the broader impact of AI, including its effects on the environment, the economy, and the spread of misinformation. The influx of AI-generated content is displacing the human creators, artists, and writers whose work originally fueled the internet.

Pansino isn’t alone in pushing back. Jeremy Carrasco, who uses the online handle @showtoolsai, debunks viral AI videos, pointing out telltale signs of manipulation like jump cuts and continuity errors. He highlights the emotional manipulation inherent in “slop,” which is designed to generate engagement and, in turn, ad revenue. Kapwing, a video tool maker, estimates that top “slop” accounts are earning millions of dollars annually through advertising.

Platforms are attempting to address the issue, with LinkedIn reporting some success in verifying user accounts. Still, the rapid evolution of AI makes it difficult to keep pace. Engagement pods – groups using AI-powered automation to artificially inflate engagement – are proving difficult to identify and remove, according to Oscar Rodriguez, vice president of trust products at LinkedIn.

Efforts to combat AI slop include labeling and watermarking content, with the Coalition for Content Provenance and Authenticity working to standardize these practices. Researchers are also exploring innovative solutions, such as embedding watermarks in light itself, a technique developed at Cornell University that could make manipulation more difficult to conceal.

The publishing industry is also grappling with AI-generated content, facing challenges from chatbots and AI-powered translation tools. The prepublication database arXiv has seen a surge in submissions, prompting tighter submission guidelines and increased reliance on volunteer reviewers to identify potentially fraudulent or AI-generated studies. A research team at Queensland University of Technology has even developed a machine learning tool to detect fake research papers, a response to the rise of “paper mills” that generate and sell fraudulent academic content.

While some tech companies are introducing tools to address the problem, others are integrating AI directly into their platforms, creating a conflict of interest. DiVine, a new AI-free social media app funded in part by Twitter co-founder Jack Dorsey, aims to provide a haven for authentic content, utilizing verification systems to prevent the spread of AI-generated videos.

The rise of AI-generated content also extends to the political sphere, with the emergence of “slopaganda” – AI-generated content designed to manipulate beliefs. A Stanford University study found that people struggle to identify AI-generated political messages, and those messages are as persuasive as those written by humans. The potential for misuse is significant, with concerns about the spread of misinformation and the erosion of trust. Legislative efforts to regulate AI are fragmented, with state and federal governments pursuing different approaches, and tech companies lobbying to minimize restrictions.

Perhaps the most insidious form of AI slop is the creation of deepfakes – realistic but fabricated images and videos. The ease with which deepfakes can be created, and the potential for misuse, including nonconsensual intimate imagery, are raising serious concerns. The Take It Down Act, signed into law in 2025, attempts to address this issue, but enforcement remains a challenge.
