Rosanna Pansino has been sharing her baking creations with the internet for over 15 years, hoping to delight and inspire with fun creations that include a Star Wars Death Star cake and holographic chocolate bars. But in her latest series, she has a new goal: “Kick AI’s butt.”
Blame it on the AI slop overwhelming her social media feeds. Pansino used to see posts from real bakers and friends; now, they’re being crowded out by AI-generated clips. There’s a whole genre of slop videos that feature food, including a bizarre trend of unlikely objects being spread “satisfyingly” on toast.
She decided to do something about it. She would put her years of skill side-by-side with AI to recreate these slop videos in real life.
For instance: a pile of sour gummy Peach Rings, effortlessly smeared on toast. The AI video looked simple enough, but Pansino needed to create something entirely new. She used butter as her base, infused with peach-flavored oil. Yellow and orange food coloring gave it the right pastel hues. She carefully piped the butter into rings using a silicone mold. After they hardened in the freezer, she used uncolored butter to glue two rings together in the right 3D shape. The final touch was to dunk them in a mixture of sugar and citric acid for that sour candy look and taste.
It worked. The butter rings were perfect replicas of real candy rings, and Pansino’s video paralleled the AI version exactly, with the rings smoothly gliding across the toast. Most importantly, she had done what she set out to do.
“The internet is flooded with AI slop, and I wanted to find a way to fight back against it in a fun way,” Pansino tells me.
It’s a rare victory for humans as AI-generated slop inundates an online world that had, once upon a time, been built by humans for humans.
AI technology has been working behind the scenes on the internet for years, often in unnoticeable ways. Then, a few years ago, generative AI burst onto the scene, launching a transformation that has unfolded at breakneck speed. With it came a flood of AI slop, a term given to particularly lukewarm AI-generated text, images and videos that are inescapable online, from search engines to publishing and social media.
“AI slop” is a shabby imitation of content, often a pointless, careless regurgitation of existing information. It’s error-prone, with summaries proudly proclaiming made-up facts and papers citing fake credentials. Images tend to have a slick, plastic veneer, while brainrot videos struggle to obey basic laws of physics. Think fake bunnies on trampolines and AI Overviews advising you to put glue on pizza.
The vast majority of US adults who use social media (94%) believe they see AI-generated content when scrolling, a new CNET study found. Only 11% found it entertaining, useful or informative.
Slop happens because AI makes it quicker, easier and cheaper than ever to create content at an unimaginable scale. OpenAI’s Sora, Google’s Nano Banana and Meta AI churn out videos, images and text with a few clicks.
Experts have loudly voiced concerns about AI’s impact on the environment, the economy, the workforce, misinformation, children and other vulnerable folks. They’ve cited its ability to further bias, supercharge scams and harm human creativity, but nothing has slowed down the rapid adoption and scaling of AI. It’s overtaking the human creators, artists and writers whose work fuels the very existence of these models.
AI slop is an oil spill in our digital oceans, but there are a lot of people working to clean it up. Many are fighting for better ways to identify and label AI content, from memes to deepfakes. Creators are pushing for better media literacy and changing how we consume media. Publishers, scientists and researchers are testing new strategies to keep bad information from gaining traction and credibility. Developers are building havens from slop with AI-free online spaces. Legislation and regulation, or the lack of it, play a role in each potential solution.
We won’t ever be completely rid of AI, but all these efforts are bringing some humanity back to the internet. Pansino’s recreations of AI videos highlight the painstakingly detailed hard work that goes into creation, way more than typing a prompt and clicking generate.
“Human creativity is one of the most important things we have in the world,” says Pansino. “And if AI drowns that out, what do we have left?”
Creators who push back: ‘AI could never’
The internet was built on videos such as Charlie Bit My Finger, Grumpy Cat and the Evolution of Dance. Now, we have videos of AI-generated cats forming a feline tower and “Feel the AGI” memes. These innocuous AI posts are why some people on social media see slop as entertainment or a new kind of internet culture. Even when videos are very obviously AI, people don’t always mind if they’re perceived as harmless fun. But slop is never benign.
You see slop because it’s being forced upon you — not because you’ve indicated to the algorithms that you love it. If you were to sign up for a new YouTube account today, a third of the first 500 YouTube Shorts shown to you would be some form of AI slop content, according to a report from Kapwing, a maker of online video tools. There are over 1.3 billion videos labeled as AI-generated on TikTok as of February. Slop is baked into our scrolling the same way microplastics are a default ingredient in our food.
Pansino compares her experience recreating AI food slop videos to an episode of The Office. In it, Dwight is competing with the company’s new website to see if he can make more sales.
“Dwight, single-handedly, is outselling the website — he’s competing against the machine,” Pansino says. “That’s what I feel like when I’m baking against AI. It’s a nice rush.”
(The Office fans may recall that Dwight wins at the end of the episode, and later, because of massive errors and fraud, the site’s creator, Ryan, is fired.)
Her 21 million-plus followers across YouTube, Instagram and TikTok have cheered on her AI recreation series, which Pansino attributes to their own frustrations with seeing slop on their feeds. Plus, her creations are actually edible.
“We’re getting dimensions that AI could never,” she says.
Other creators have emerged as “reality checkers.” Jeremy Carrasco (@showtoolsai) uses his background as a technical video producer to debunk viral AI videos. His team used to livestream events for businesses, where avoiding production errors trained his eye, so he can more easily spot when AI clumsily mimics video qualities such as lens flares. His educational videos help his more than 870,000 Instagram, YouTube and TikTok followers recognize these abnormalities.
Analyzing a video’s context, Carrasco points out telltale signs of generative AI such as weird jump cuts and continuity issues. He also finds the first time a video was shared by a real person or a slop account. Everyone can do this, but it’s hard when you’re being “emotionally baited” by slop, Carrasco says.
“Most people aren’t spending their time analyzing videos like I am. So if it hits their subconscious [signaling], ‘This looks real,’ their brain might shut off there,” Carrasco says.
Slop producers don’t want you to second-guess what you’re seeing. They want you to get emotional — whether that’s delighted by bunnies on a trampoline or outraged by political memes — and to argue in the comments and share the videos with your friends. The goal for many producers of AI slop is engagement and, therefore, monetization. The Kapwing report estimates the top slop accounts are pulling in millions of dollars of ad income per year. They’re just like the original engagement farmers and ragebaiters on Twitter. What’s old is now AI-powered.
Seeing isn’t believing. What now?
It can be difficult for the online platforms we rely on to identify AI images and videos. To weed out the worst offenders, the accounts that mass-produce sloppy spam, some platforms encourage their real users to add verifications to their accounts. LinkedIn has had some success here, with over 100 million of its members adding these new verifications. But AI makes it hard to keep up.
People are using AI-powered community automation tools to make AI-generated posts and leave comments across hundreds of random accounts in a fraction of the time it would take to do so manually. Groups of these users are called engagement pods, Oscar Rodriguez, vice president of trust products at LinkedIn, tells me. The company has removed “hundreds of LinkedIn groups” that display these engagement-farming behaviors in just the past few months, but identifying them is hard.
“There is no one signal that I can tell you that definitely makes [an account] inauthentic or fake, but it’s a combination of different signals, the behavior of the accounts,” says Rodriguez.
Take AI-generated images, for example. Many people use AI to create new headshots to avoid paying for costly photoshoots, and it’s not against LinkedIn’s rules to use them as profile pictures. So an AI headshot alone isn’t enough to warrant suspicion. But if an account has an AI profile photo and has other warning signs — like commenting more frequently than LinkedIn internally knows is typical for human users — that raises red flags, Rodriguez says.
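To make the combination-of-signals idea concrete, here’s a minimal, entirely hypothetical sketch in Python. None of the names, thresholds or weights come from LinkedIn; they simply show how several weak signals, none damning on its own, can add up to a flag for human review.

```python
from dataclasses import dataclass

# Hypothetical thresholds and weights, for illustration only;
# this is not LinkedIn's real detection logic.
TYPICAL_COMMENTS_PER_DAY = 20
REVIEW_THRESHOLD = 2

@dataclass
class AccountSignals:
    has_ai_profile_photo: bool      # e.g., flagged by an image classifier
    comments_per_day: float         # observed posting rate
    suspected_pod_memberships: int  # groups showing coordinated engagement

def risk_score(signals: AccountSignals) -> int:
    """Combine weak signals into one score; no single signal is decisive."""
    score = 0
    if signals.has_ai_profile_photo:
        score += 1
    if signals.comments_per_day > TYPICAL_COMMENTS_PER_DAY:
        score += 1
    score += min(signals.suspected_pod_memberships, 2)
    return score

def needs_human_review(signals: AccountSignals) -> bool:
    return risk_score(signals) >= REVIEW_THRESHOLD

# An AI headshot alone doesn't trigger review; paired with an implausible
# comment rate, it does.
print(needs_human_review(AccountSignals(True, 5, 0)))    # False
print(needs_human_review(AccountSignals(True, 200, 0)))  # True
```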
To spot AI content, platforms rely on labeling and watermarking. Labeling requires people to disclose that their work was made with AI. If you don’t, monitoring systems can attempt to flag it themselves. One of the strongest signals these systems rely on is watermarks, which are invisible signatures applied during content creation and hidden in a piece of content’s metadata. They give you more information about how and when something was created.
Most watermarking techniques focus on two areas: hardware companies authenticating real content as it’s captured, and AI companies embedding signals into their synthetic, AI-generated media when it’s created. The Coalition for Content Provenance and Authenticity (C2PA) is a major advocacy group trying to standardize how synthetic media is watermarked with content credentials.
Many, but not all, AI models are compatible with the C2PA’s framework. That means its verification tool can’t flag every piece of AI-generated media, which creates inconsistency and confusion. Half of US social media users (51%) want better labeling, CNET found. That’s why other solutions are in the works to fill the gaps.
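At their core, content credentials are a signed provenance manifest that travels with a file: a record of how the media was made, sealed so later tampering can be detected. Below is a deliberately simplified Python sketch of that idea. It uses a plain JSON claim and an HMAC rather than the certificate-based signatures the real C2PA specification calls for, so treat it as an illustration of the concept, not the standard itself.

```python
import hashlib
import hmac
import json

# Illustration only: real content credentials use certificate-based signatures
# and a standardized manifest format, not a shared secret like this.
SIGNING_KEY = b"example-signing-key"

def attach_credentials(media_bytes: bytes, generator: str) -> dict:
    """Build a simplified provenance manifest for a piece of media."""
    manifest = {
        "claim": {"generator": generator, "created_with_ai": True},
        "content_hash": hashlib.sha256(media_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_credentials(media_bytes: bytes, manifest: dict) -> bool:
    """Return True only if both the claim and the media are unaltered."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the claim or hash was edited after signing
    return hashlib.sha256(media_bytes).hexdigest() == manifest["content_hash"]

image = b"...image bytes from a generator..."
creds = attach_credentials(image, generator="ExampleImageModel")
print(verify_credentials(image, creds))            # True: intact
print(verify_credentials(image + b"edit", creds))  # False: pixels changed
```

The practical weakness is the one described above: verification only works where credentials are attached and checked in the first place, and not every model or platform participates.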
Abe Davis, a computer science professor at Cornell University, led a team that developed a way to embed watermarks in light. All that’s needed is to turn on a lamp that uses the necessary chip to run the code. This process is called noise-coded illumination. Any camera that captures video footage of an event where the light is shining will automatically add the watermark.
“Instead of applying the watermark to data that’s captured by a specific camera, [noise-coded illumination] applies it to the light environment. Any camera that’s recording that light is going to record the watermark,” Davis says.
The watermark is hidden in the light’s frequencies, spread across a video, undetectable to the human eye and difficult to remove. Those with the secret code can decode the watermark and see what parts of a video or image have been manipulated, down to the pixel level. This would be especially helpful for live events, like political rallies and press conferences, where the speakers are targets for deepfakes.
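To build intuition for how a watermark can ride on light itself, here’s a toy numerical sketch in Python. It’s loosely inspired by the description above and is not the Cornell team’s actual algorithm: a lamp flickers imperceptibly according to a secret pseudorandom code, any camera filming the scene picks up that flicker in its frame-to-frame brightness, and anyone holding the code can detect it by correlation.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

FRAMES = 900            # roughly 30 seconds of 30 fps video
CODE_AMPLITUDE = 0.01   # an imperceptible ~1% brightness flicker
SECRET_CODE = rng.choice([-1.0, 1.0], size=FRAMES)  # pseudorandom +/-1 code

def record_scene(watermarked: bool) -> np.ndarray:
    """Simulate the per-frame average brightness any camera would capture."""
    base = 0.5 + 0.05 * np.sin(np.linspace(0, 8 * np.pi, FRAMES))  # scene motion
    noise = rng.normal(0, 0.02, FRAMES)                            # sensor noise
    flicker = CODE_AMPLITUDE * SECRET_CODE if watermarked else 0.0
    return base + noise + flicker

def detect_watermark(brightness: np.ndarray) -> float:
    """Correlate the recorded brightness with the secret code."""
    centered = brightness - brightness.mean()
    return float(np.dot(centered, SECRET_CODE) / FRAMES)

print(detect_watermark(record_scene(watermarked=True)))   # near 0.01, the code amplitude
print(detect_watermark(record_scene(watermarked=False)))  # near zero
```

In the watermarked recording the correlation lands near the code amplitude; in an unwatermarked one it hovers near zero, which is what makes the signature hard to fake or strip without knowing the code.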
Though it isn’t yet commercially available, the research points to new opportunities for adding an extra layer of protection against AI fakery. Watermarking is a kind of collective action problem, Davis says: everyone would benefit if all these approaches were implemented, but no single player benefits enough to drive it. That’s why efforts remain haphazard, spread across multiple industries that are highly competitive and rapidly changing.
Labeling and watermarking are important tools in the fight against slop, but they won’t be enough on their own. Simply having AI labeled doesn’t stop it from filling our lives. But it is a necessary first step.
Publishing pains
If you think it’s easier to single out AI-generated text than images or videos, think again. Publishing is one of the biggest targets of AI slop after social media. Chatbots and Google’s AI Overviews eat up articles from news sources and other digital publications and spit out wonky and potentially copyright-infringing results. AI-powered translation and record-keeping tools threaten the work of translators and historians, but the tech’s superficial understanding of cultures and nuances makes it a poor substitute.
Slop is especially pervasive in academic publishing. In a “publish or perish” culture like academia, some of it may be unintentionally or mistakenly created, especially by first-time researchers and writers. But it’s slipping into mainstream journals, like a now-retracted study that went viral for including an obviously incorrect, typo-riddled and overly phallic AI-generated image of a rat’s reproductive system. That’s one example, albeit a hilarious and easily recognizable one, of how AI is turbocharging bad research, particularly for companies that sell fake research to academic publishers, known as paper mills.
The respected and widely used prepublication database arXiv is one of the biggest targets for AI slop. Editorial director Ramin Zabih and scientific director Steinn Sigurdsson tell me that submissions typically increase about 20% each year; now, it’s getting “worrisomely faster,” Zabih says. AI is to blame, they say.
ArXiv gets around 2,000 submissions a day, half of which are revisions. It has automated screening tools to weed out the most obviously fraudulent or AI-generated studies, but it heavily relies on hundreds of volunteers who review the remaining papers according to their areas of expertise. It’s also had to tighten its submission guidelines, adopting an endorsement system to ensure only real people can share research. It’s not a perfect fix, Sigurdsson acknowledges, but it’s necessary to “stem the flood” of scientific slop.
“The corpus of science is getting diluted. A lot of the AI stuff is either actively wrong or it’s meaningless. It’s just noise,” says Sigurdsson. “It makes it harder to find what’s really happening, and it can misdirect people.”
There’s been so much slop that one research group used those fraudulent papers to build a machine learning tool that can recognize it. Adrian Barnett, a statistician and researcher at Queensland University of Technology, was part of the team that used retracted journal papers to train a language model to spot fake and potentially AI-generated studies, specifically in cancer research, which is sadly a prime target for fabrication.
Paper mill-created articles “have the semblance of a paper,” Barnett says. “They know what a paper should look like, and then they spin the wheel. They might change the disease, they’ll change a protein, they’ll change a gene and presto, you’ve got a new paper.”
The tool acts as a kind of scientific spam filter. It identifies patterns, like commonly used phrases, in the templates that chatbots and human fabricators rely on to mimic academia’s style. It’s one example of how AI technology itself is being used to fight slop — AI versus AI, in many cases. But like other AI verification tools, it’s limited; it can only identify the templates it was trained on. That’s why human oversight is especially important.
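For a rough sense of how a scientific spam filter like this can work, here’s a generic sketch, assuming scikit-learn and a small labeled set of abstracts; it illustrates the general approach of learning templated phrasing from retracted papers, not Barnett’s actual model.

```python
# A minimal sketch of a retraction-trained text classifier. Assumes
# scikit-learn is installed and you have labeled abstracts on hand;
# it illustrates the approach, not the published tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = retracted/paper-mill style, 0 = legitimate.
abstracts = [
    "The expression of miR-21 promotes proliferation in cancer cells ...",
    "We measured cosmic ray flux variations over a three-year baseline ...",
    "The expression of miR-155 promotes migration in cancer cells ...",
    "Soil samples were collected from twelve field sites and assayed ...",
]
labels = [1, 0, 1, 0]

# Word n-grams capture the boilerplate phrasing that templated papers reuse
# when only the gene or disease name is swapped out.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3), sublinear_tf=True),
    LogisticRegression(max_iter=1000),
)
model.fit(abstracts, labels)

suspect = "The expression of miR-34a promotes invasion in cancer cells ..."
print(model.predict_proba([suspect])[0][1])  # probability it fits the mill template
```

Because a filter like this only learns the boilerplate it has seen, a new template slips straight past it, which is part of why human review still matters.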
Humans have gut instincts and subject matter expertise that AI doesn’t. For example, arXiv’s moderators flagged a fake series of submissions because the authors’ names stuck out to them as too stereotypically British, like characters from Jane Eyre. But the demand for human review risks a “death spiral,” Zabih says, where reviewers’ workloads get larger and more unpleasant, causing some to quit reviewing and piling more stress onto those who remain.
“There’s a bit of an arms race between writing [AI] content and tools for automatically determining it,” Zabih says. “But at this point in time, I hate to say this, it’s a battle we’re losing slowly.”
Can there be a safe haven from slop?
Part of the problem with slop — if not the entire problem — is that the handful of companies that run our online lives are also the ones building AI. Meta slammed its AI into Instagram and Facebook. Google integrated Gemini into every segment of its vast business, from search to smartphones. X is practically inseparable from Grok. It’s very difficult, and in some cases impossible, to turn off AI on certain devices and sites. Tech giants say they’re adding AI to improve our experience. But that means they have a pretty big conflict of interest when it comes to reining in slop.
They’re desperate to prove their AI models are sought after and work well. We’re the guinea pigs used to inflate their usage stats for their quarterly investor meetings. While some companies have introduced tools to help deal with slop, it’s not nearly enough. They aren’t overly interested in helping solve the problem they created.
“You cannot separate the platforms from the people making the AI,” Carrasco says. “Do I trust [tech companies] to have the right compass about AI? No, not at all.”
Meta and TikTok declined to comment on the record about efforts to rein in AI-generated content. YouTube spokesperson Boot Bullwinkle said, “AI is a tool for creativity, but it’s not a shortcut for quality,” and that to prioritize quality experiences, the company is “less likely to recommend low-quality or repetitive content.”
Other companies are swerving in the opposite direction. DiVine is one of a few AI-free social media apps, a reimagining of Vine, the short-lived short-video service that predated TikTok. Created by Evan Henshaw-Plath, with funding from Twitter co-founder Jack Dorsey, the new app will include an archive of over 10,000 Vines from the original service — no need to hunt down those Vine compilations on YouTube. It’s an appealing blend of nostalgia for a less-complicated internet and an alternative reality where slop hasn’t taken over.
“We’re not anti-AI,” DiVine chief marketing officer Alice Chan says. “We just think that people deserve a place they can come where there’s a high level of trust that the content they’re seeing is real and made by real people.”
To keep AI videos off the platform, the company is working with The Guardian Project to use its identification system, ProofMode, built on top of the C2PA framework to verify human-created content. It also plans to work with AI labs to “design checks … that look at the underlying structure of these videos,” Henshaw-Plath said in a podcast earlier this year. DiVine users will also be able to report AI videos they spot, and because the app won’t allow new video uploads at launch, slop will have a harder time slipping through in the first place.
Authenticity matters now more than ever, and social media executives know it. On New Year’s Eve, Instagram chief Adam Mosseri wrote a lengthy post about needing to return to a “raw” and “imperfect” aesthetic, criticizing AI slop and defending AI use in the same paragraph. YouTube CEO Neal Mohan started 2026 with a letter explicitly stating slop is an issue and that platforms need to be “reducing the spread of low-quality, repetitive content.”
But it’s hard to imagine platforms like Instagram and YouTube will be able to return to a truly people-centric, authentic and real culture as long as they rely on algorithmic curation of recommended content, push AI features and allow people to share entirely AI-generated posts. Apps in the spirit of Vine, which never demanded perfection or developed AI, might have a fighting chance.
Slopaganda and the messy web of AI in politics
AI is a power player in politics, responsible for creating a distinct new aesthetic and influencing opinions, culminating in what’s called slopaganda — AI content shared specifically to manipulate beliefs and achieve political ends, as one early study puts it.
AI is already an effective tool for influencing our beliefs, according to a recent Stanford University study. Researchers wanted to understand whether people could identify political messages written by AI and measure how effective those messages were at shifting beliefs. When reading an AI-created message, the vast majority of respondents (94%) couldn’t tell it was machine-written. The AI-generated political messages were also as persuasive as those written by humans.
“It’s quite difficult to craft these persuasive messages in a way that resonates with people,” says Jan Voelkel, one of the study’s authors. “We thought this was quite a high bar for large language models to achieve, and we were surprised by the fact that they were already doing so well.”
It’s not necessarily a bad thing that AI can craft influential political messages when done responsibly. But AI can be used by bad actors to spread misinformation, Voelkel says. The risk is that one-person misinformation teams can use AI to sway people’s opinions while operating more efficiently than before.
One way we see the influence and normalization of slop in politics is with imagery. AI memes are a new kind of political commentary, as demonstrated by President Donald Trump and his administration: The White House’s AI image of a woman crying while being deported; Trump’s AI cartoon video of himself wearing a crown and flying a fighter jet after nationwide “No Kings” protests; Defense Secretary Pete Hegseth’s parody book cover of Franklin the Turtle holding a machine gun shooting at foreign boats; an AI-edited image that altered a woman’s face to appear as though she was crying after being arrested for protesting Immigration and Customs Enforcement.
Governments have the power to determine whether and how to regulate AI. But legislative efforts have been haphazard and scattered. Individual states have taken action, as in the case of California’s AI Transparency Act, Illinois’ limits on AI therapy, Colorado’s algorithmic discrimination rules and more. But these laws are caught in a conflict between the states and the federal government.
Trump said patchwork state regulation will prevent the US from “winning” the global AI race by slowing down innovation, which is why the Department of Justice formed a task force to crack down on state AI legislation. The administration’s AI Action Plan, meanwhile, calls for slashing regulations for AI data centers and proposes a new framework to ensure AI models are “free from top-down ideological bias,” though it’s unclear how that would play out.
Tech leaders like Apple’s Tim Cook, Amazon’s Jeff Bezos, OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, Microsoft’s Bill Gates and Alphabet’s Sundar Pichai have met with Trump multiple times since he took office. With an increasingly cozy relationship to the White House, Google and OpenAI have welcomed the push to cut legal red tape around AI development.
While governments dither on regulation, tech companies have free rein to continue as they please, lightly constrained by a few AI-specific laws. Comprehensive, enforceable legislation could control the fire hose of dangerous slop, but as of now, the people responsible for it are either unable or unwilling to do so. This has never been clearer than with the rise of AI deepfakes and AI-powered image-based abuse.
Deepfakes: Fake content, real harm
Deepfakes are the most insidious form of AI slop. They’re images and videos so realistic we can’t tell whether they’re real or AI-generated.
We had deepfakes before we had AI. But pre-AI deepfakes were expensive to create, required specialized skills and weren’t always believable. AI changes that, with newer models creating content that’s indistinguishable from reality. AI democratized deepfakes, and we’re all worse off for it.
AI’s ability to produce abusive or illegal content has long been a concern. It’s why nearly all AI companies include policies outlawing those uses. But we’ve already seen that their systems meant to prevent abuse aren’t perfect.
Take OpenAI’s Sora app, for example. The app exploded in popularity last fall, letting you make videos featuring your own face and voice and the likenesses of others. Celebrities and public figures quickly asked OpenAI to stop harmful depictions of them. Bryan Cranston, the actors’ union SAG-AFTRA and the estate of Martin Luther King Jr. all reached out to the company with their concerns, and it promised to build stronger safeguards.
(Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Sora requires your consent before letting other people use your likeness. Grok, the AI tool made by Elon Musk’s xAI, does not. That’s how people were able to use Grok to make AI-generated nonconsensual intimate imagery.
From late December into early January, a rush of X users asked Grok to create images that undress or nudify people in photos shared by others, primarily women. Over a nine-day period, Grok created 4.4 million images, of which 1.8 million were sexual, according to a New York Times report. The Center on Countering Digital Hate did a similar study, which estimated that Grok made approximately 3 million sexualized images over 11 days, with 23,000 of those deepfake porn images including children.
That’s millions of incidents of harassment that were enabled and efficiently automated by AI. The dehumanizing trend highlighted how easy it is for AI to be weaponized for harassment.
“The perpetrator can be literally anyone, and the victim could be literally anyone. If you have a photo online, you could be a victim of this now,” says Dani Pinter, chief legal officer at the National Center on Sexual Exploitation.
X did not respond to multiple requests for comment.
Deepfakes and nonconsensual intimate imagery are illegal under the 2025 Take It Down Act, but the law also gave platforms a grace period (until May) to set up processes for taking down illicit images. Its enforcement mechanisms only allow the DOJ and the Federal Trade Commission to investigate the companies, Pinter says, not individuals to sue perpetrators or tech companies. Neither agency has opened an investigation yet.
Deepfakes hit on a core issue with AI slop: our lack of control. We know AI can be used for malicious purposes, but we don’t have many individual levers to pull to fight back. Even looking at the big picture, there’s so much turmoil around AI legislation that we’re largely forced to rely on the people building AI to ensure it’s safe. The current guardrails might work sometimes, but clearly not all the time.
Grok’s AI image-based sexual abuse was “so foreseeable and so preventable,” Pinter says.
“If you designed a car, and you didn’t even check if certain equipment would explode, you would be sued to oblivion,” Pinter says. “That is a basic bottom line: Reasonable behavior by a corporate entity … It’s like [xAI] didn’t even do that basic thing.”
The story of AI slop, including deepfakes, is one of AI enabling the very worst of the internet: scams, spam and abuse. If there is a positive side, it’s that we’re not yet at the end of the story. Many groups, advocates and researchers are committed to fighting AI-powered abuse, whether that’s through new laws, new rules or better technology.
Fighting an uphill battle
Nearly every tech executive who’s building AI rationalizes that AI is simply the latest tool that can make your life easier. There’s some truth to that; AI will probably lead to welcome progress in medicine and manufacturing, for example. But we’ve seen that it’s a frighteningly efficient instrument for fraud, misinformation and abuse. So where does that leave us, as slop gushes into our lives with no relief valve in sight?
We’re never getting the pre-AI internet back. The fight against AI slop is a fight to keep the internet human, one we need now more than ever. The internet is inextricably intertwined with our humanity, and we’re inundated with so much fake content that we’re starving for anything real. Trading instant gratification and the sycophancy of AI for online experiences that are rooted in reality, maybe with a little more friction but also a lot more authenticity — that’s how we get back to using the internet in ways that give to us rather than drain us.
If we don’t, we may be headed for a truly dead internet, where AI agents interact with each other to give the illusion of activity and connection.
Substituting AI for humanity won’t work. We’ve already learned this lesson with social media. The AI slop ocean that used to be social media is driving us further from the tech’s original purpose: connecting people.
“AI slop is actively trying to destroy that. It’s actively trying to replace that part of your feed because your attention is limited, and it is actively taking away the connections that you had,” Carrasco says. “I hope that AI video and AI slop make people wake up to how far we drifted.”
Art Director | Jeffrey Hazelwood
Creative Director | Viva Tung
Video Presenter | Katelyn Chedraoui
Video Editor | JD Christison
Project Manager | Danielle Ramirez
Editors | Corinne Reichert and Jon Reed
Director of Content | Jonathan Skillings

