‘Cheapfake’ AI Celeb Videos Are Rage-Baiting People on YouTube

WIRED found over 100 YouTube channels using AI to create lazy fan-fiction-style videos. Despite being obviously fake, there’s a psychological reason people are falling for them.
The Late Show With Stephen Colbert and guest Keanu Reeves during the July 22, 2024 show. Photo-Illustration: WIRED Staff; Photograph: Scott Kowalchyk/Getty Images

Mark Wahlberg straightens his tie and beams at the audience as he takes his seat on daytime talk show The View, ahead of his hotly anticipated interview. Immediately, he’s unsettled by the host, Joy Behar. Something isn’t quite right about her mannerisms. Her eyes seem shifty, suspicious, even predatory. There’s a sense, almost, of the uncanny valley—her presence feels oddly inhuman. His instincts are right, of course, and he’s soon forced to defend himself against a barrage of cruel insults playing on his deepest vulnerabilities. But Wahlberg stays strong. He retains composure as Behar screams at him to get off the stage. “I'll leave, but not because you're kicking me off,” he announces. “I'll leave because I have too much respect for myself to sit here and be abused by someone who's forgotten what decency looks like!” Audience members are stunned—none more so than those watching at home on YouTube, who swiftly thumb in their words of reassurance.

“Way to go Mark. We love you,” gushes one commenter. “The View should be off the TV,” adds another. “I hope everyone they insult sues them for millions and millions till they can't even pay for production.”

It’s a scene that has been described as one of the most talked about moments in daytime television history. Except Mark Wahlberg hasn’t been a guest on The View since 2015. The inevitable twist? None of this happened in reality, but rather unfolded over the course of a 25-minute-long fan-fiction-style video, made with the magic of artificial intelligence to potentially fool 460,000 drama-hungry viewers. Hardly surprising, given that the towering pile of AI slop on the web has reached unpoliceable levels—with recent clips so realistic they’re tripping up the most media-literate of Zoomers.

But perhaps what is surprising about this otherwise unoriginal clash of the titans is that none of this happened in the video, either. Despite its characters’ kinetic confrontation, “Mark Wahlberg Kicked Off The View After Fiery Showdown With Joy Behar” is entirely motionless, save for a grainy filter added over a still image. It entertains its audience simply with an AI voice-over, narrating an LLM-written script laden with clichés as theatrical as “fist-clenching” and “jaw wobbling.” It’s cheap, lazy—the very definition of slop—but somehow, the channel it’s hosted on, Talk Show Gold, has managed to round up over 88,000 subscribers, many of whom express complete disbelief when eventually informed by other commenters that what they are watching is “fake news.”

“These videos are what we might call ‘cheapfakes’ rather than deepfakes, as they’re cobbled together from a motley selection of real images and videoclips, with a basic AI voice-over and subtitles,” explains Simon Clark, a cognitive psychologist at the University of Bristol, who specializes in AI-generated misinformation. “At a superficial level, we might be surprised that people would be fooled by something this unsophisticated. But actually there are sound psychological factors at play here,” he adds, explaining that the videos typically focus on rhetorical techniques that encourage audiences to abandon critical thinking skills by calling to emotion.

A WIRED investigation found 120 YouTube channels employing similar tactics. With misleading names like Starfame, Media Buzz, and Celebrity Scoop, they camouflage themselves alongside real compilation clips from shows like Jimmy Kimmel Live! and Today With Jenna & Friends to gain credibility. Their channel descriptions give the illusion of melodramatic tabloid outlets—some bury their AI disclaimers under walls of text emphasizing “all the best highlights” or “the most unforgettable, hilarious, and iconic moments,” while others omit them entirely to add to the flair.

YouTube updated its policies on July 15 in a move to crack down on content made with generative AI. The platform’s Help Center stipulates that content eligible for monetization must adhere to YouTube’s requirements of being sufficiently “authentic” and “original”—but there is no outright mention of generative AI alongside it, with the policy simply stating that eligible content must “be your original creation” and “not be mass-produced or repetitive.” A separate policy on “disclosing use of altered or synthetic content” states that creators must disclose when content “makes a real person appear to say or do something they didn’t do,” “alters footage of a real event or place,” or “generates a realistic-looking scene that didn’t actually occur.”

WIRED reached out to YouTube for comment on more than 100 AI-generated celebrity fanfic channels, as well as clarification on how YouTube’s new policies would be enforced.

“All content uploaded to YouTube must comply with our Community Guidelines, regardless of how it is generated. If we find that content violates a policy, we remove it,” Zayna Aston, director of YouTube EMEA communications, said in a statement to WIRED. Aston also reiterated that channels employing deceptive practices are not permitted on the platform, including those using misleading metadata, titles, and thumbnails.

WIRED can also confirm that 37 of the flagged celebrity talk show and other fan-fiction-style channels were removed, chiefly those without AI disclaimers and some with the most egregious channel names, such as Celebrity Central and United News.

The story lines in these videos follow a predictable pattern that plays on age-old narrative tropes that justify fan-fiction comparisons. A well-loved celebrity—usually an older male actor like Clint Eastwood, Denzel Washington, or Keanu Reeves—is positioned as the hero, defending themselves against the villain, a left-leaning talk-show host who steers the professional conversation into ad hominem attacks. It’s obvious who the right-leaning, older audience is primed to relate to—who serves as the visual fic’s Mary Sue. There’s an undeniable political element at play when it comes to who is targeted, with videos focusing exclusively on political figures also constituting their own subgenre.

“They’re tweaking my voice or whatever they're doing, tweaking their own voice to make it sound like me, and people are commenting on it like it is me, and it ain't me,” Washington recently told WIRED when asked about AI. “I don't have an Instagram account. I don't have TikTok. I don’t have any of that. So anything you hear from that—it's not even me, and unfortunately, people are just following, and that’s the world you guys live in.”

For Clark, the talk-show videos are a clear appeal to incite moral outrage—allowing audiences to more easily engage with, and spread, misinformation. “It’s a great emotion to trigger if you want engagement. If you make someone feel sad or hurt, then they’ll likely keep that to themselves. Whereas if you make them feel outraged, then they’ll likely share the video with like-minded friends and write a long rant in the comments,” he says. It doesn’t matter either, he explains, if the events depicted aren’t real, or are even clearly labeled “AI-generated,” so long as the characters involved might plausibly act this way in some other scenario (in the mind of their viewers, at least). YouTube’s own ecosystem also inevitably plays a role. With so many viewers consuming content passively while driving, cleaning, even falling asleep, AI-generated content no longer needs to look polished when blending into a stream of passively absorbed information.

Reality Defender, a company specializing in identifying deepfakes, reviewed some of the videos. “We can share that some of our own family members and friends (particularly on the elderly side) have encountered videos like these and, though they were not completely persuaded, they did check in with us (knowing we are experts) for validity, as they were on the fence,” Ben Colman, cofounder and CEO of Reality Defender, tells WIRED.

WIRED also reached out to several channels for comment. Only one creator, the owner of a channel with 43,000 subscribers, responded.

“I am just creating fictional story interviews, and I clearly mention in the description of every video,” they say, speaking anonymously. “I chose the fictional interview format because it allows me to combine storytelling, creativity, and a touch of realism in a unique way. These videos feel immersive—like you're watching a real moment unfold—and that emotional realism really draws people in. It’s like giving the audience a ‘what if?’ scenario that feels dramatic, intense, or even surprising, while still being completely fictional.”

But when it comes to the likely motive behind the channels, most of which are based outside the US, neither a strict political agenda nor a sudden career pivot to immersive storytelling serves as an adequate explainer. A channel with an email that uses the term “earningmafia,” however, hints at more obvious financial intentions, as does the channels’ repetitive nature—with WIRED seeing evidence of duplicated videos, and multiple channels operated by the same creators, including some who had sister channels suspended.

This is unsurprising: content farms, especially those targeting the vulnerable, have cemented themselves on YouTube in greater numbers than ever alongside the rise of generative AI. Across the board, creators pick controversial topics like kids’ TV characters in compromising situations, even Sean Combs’ sex-trafficking trial, to generate as much engagement—and income—as possible.

Sandra Wachter, a professor and senior researcher in data ethics, AI, and algorithms at the University of Oxford, explains that this rage-bait-style content is central to the platform’s business model. “The whole idea is to keep you on the platform for as long as possible, and unfortunately, rainbows and unicorns are not the things that keep people engaged. What keeps people engaged is something that is outrageous or salacious or toxic or ragey,” she says. “And that type of content is created much cheaper now with AI. It can be done in a couple minutes.”

Most channels give their locations as outside the US, yet chosen celebrities seem almost stereotypically American, as if picked off a list of “the most popular US actors” by those wishing to attract the most trigger-happy (and therefore lucrative) online denizens. Several channels seen by WIRED also appeared to have shifted focus throughout the years—many once posted educational content and tutorials on cars, agriculture, or fitness, having seemingly abandoned these amid the AI boom. Perhaps it’s an attempt to trick an algorithm that prioritizes creators with longer lifespans or is simply a sign of users blindly following the latest trend in an attempt to bolster otherwise low income.

When told about YouTube’s new policies, Wachter says she’s hopeful that a move toward demonetization will make a positive impact—but she says we’re not “getting to the base of the problem.”

“This is a system that breeds toxicity because it's based on generating clicks and keeping eyeballs attached to a screen.”

If X posts like this are anything to go by—it won’t be long before they start to resurface. The modus operandi of channels like these is clear and seems capable of outsmarting YouTube’s authenticity policies. They aim not to trick viewers with sophisticated special effects but instead employ age-old psychological techniques, combined with the platform’s unique habitat, to seem realistic enough to be believed by those already harboring such attitudes. They appeal to those who do not care about their realism—cashing in on views that don’t impact them. They intend to rage-bait, to instigate debate, to trigger moral outrage. They need not put much effort into their appearance at all. Arguably, they mark a transition point between the mechanical desire to replicate human content exactly, and the eerily human trait of creating something just good enough to coast by.

Manisha Krishnan contributed to the reporting of this story.