Deepfake technology is blurring the lines between reality and digital manipulation, turning AI-generated likenesses into powerful tools for fraud, misinformation, and identity theft.
Celebrities have become prime targets, with deepfakes leading to financial losses, fake endorsements, and reputational damage. Scarlett Johansson’s recent legal battle over an AI-generated protest video highlights the urgent need for safeguards against AI impersonation. Topview has curated a list of the most concerning deepfake frauds and non-consensual endorsements involving celebrities.
Shocking Cases of Deepfake Deception
Scarlett Johansson’s Protest Video Controversy
An AI-generated video falsely depicted Johansson and other Jewish celebrities protesting Kanye West’s antisemitic remarks. Without consent, the video used AI versions of Drake, Steven Spielberg, and others. Johansson condemned the violation, emphasizing the urgent need for AI regulation. This follows previous deepfake incidents where her likeness was exploited in advertisements and AI chatbots.
Tom Hanks’ Fake Endorsement
A deepfake video promoted a dental insurance plan using an AI-generated version of Tom Hanks. The Hollywood icon warned fans, exposing how deepfake endorsements can deceive the public and harm reputations.
MrBeast’s iPhone Scam
A deepfake video of YouTuber MrBeast surfaced on TikTok, promoting a fraudulent giveaway of iPhones for $2. The scam exploited his trusted brand, highlighting how deepfakes can weaponize influencer credibility.
Joe Rogan’s Supplement Scandal
A manipulated AI video depicted Joe Rogan endorsing a “libido booster” supplement in a fake podcast with Andrew Huberman. Millions saw the fraudulent ad before TikTok intervened, demonstrating the dangers of AI voice cloning.
Drake and The Weeknd’s AI Music
An AI-generated track titled “Heart on My Sleeve” imitated Drake and The Weeknd’s voices, accumulating millions of streams before takedown notices were issued. The music industry is pushing for stronger AI regulations to prevent unauthorized use of artists’ voices.
Brad Pitt’s Romance Scam
Scammers used an AI-generated version of Brad Pitt to deceive a French woman into transferring €830,000. The case underscores the dangers of AI-driven romance scams and the challenges law enforcement faces in combating them.
We Are Witnessing the Industrial-Scale Weaponization of AI
“We are witnessing the industrial-scale weaponization of AI. Deepfakes have shattered trust in what we see and hear, turning digital identity into a battleground. Celebrities and influencers are the first casualties, used without consent for fraud, fake endorsements, and political manipulation. But the most devastating impact is on women, whose faces are being stolen and repurposed for AI-generated exploitation at an alarming rate. This is not just a technological challenge – it’s an existential crisis for digital ethics, privacy, and consent. The question is no longer if regulation will come, but if it will come fast enough to stop the damage.” – AI Analyst at Topview
The Deepfake Pornography Epidemic
These celebrity cases are the tip of the iceberg. By most estimates, the vast majority of deepfakes online are pornographic videos superimposing women’s faces onto explicit clips without consent. Big-name actresses, pop stars, and even private individuals have been victimized by this trend. Johansson was among the first high-profile women targeted, but even her fame offered little protection or legal remedy.
Bringing perpetrators to justice is notoriously difficult, as creators often hide behind anonymity and global internet borders. U.S. officials are increasingly alarmed – the White House recently blasted a fake AI sex video involving Taylor Swift as “alarming,” urging Congress to act. In fact, a bipartisan group of senators just introduced a bill that would let victims of deepfake porn sue those who create or share such content.
Industry and Legal Responses
Hollywood is scrambling to respond to the deepfake threat. Last year, Scarlett Johansson took legal action against an AI image app that used her face in an ad without permission, signaling that stars are ready to fight misuse of their likeness in court. The entertainment industry’s unions have also mobilized. During the recent actors’ strike, AI protections were a key sticking point – performers demanded contract language to safeguard their digital identities amid fears that studios might reuse their faces or voices via AI. “We do not take these things lightly,” Johansson’s attorney said, vowing to pursue all legal remedies against unauthorized AI clones. Lawmakers, meanwhile, are weighing new regulations.
The United States currently has no comprehensive federal law against deepfake forgeries, only a patchwork of state laws addressing certain cases (for example, California allows civil lawsuits over deepfake porn but hasn’t criminalized it). Bills in Congress aim to change that: the proposed No FAKES Act would impose penalties for creating unsanctioned digital replicas and grant individuals control over their own likeness as a form of intellectual property. If passed, such legislation would empower victims to take action against deepfake creators and hold platforms accountable for hosting bogus content.
Tech companies and social media platforms are likewise under pressure to detect and remove deepfakes more aggressively, though enforcement has so far been inconsistent. TikTok, for instance, eventually removed the fake MrBeast ad, but only after it had already been viewed widely.
The Future of Digital Identity
As the dust settles on these early deepfake scandals, one thing is clear: the technology is outpacing our defenses. Hollywood’s recent experiences show how quickly an AI-generated lie can spread, hijacking a celebrity’s identity in the blink of an eye. Without stronger digital protections and clear rules, observers warn the line between truth and fiction will continue to blur – eroding trust in audio-visual media and opening the door to more fraud and manipulation.
As Johansson put it after seeing her own image deceptively repurposed, society must confront the misuse of AI “or we risk losing a hold on reality.” Her plea echoes across an entertainment industry – and indeed a world – bracing for an era in which seeing is no longer believing. The race is on to rein in deepfakes before the next scandal hits, and the stakes couldn’t be higher for personal privacy and public trust in the digital age.