Robin Williams put legal mechanisms in place before his death to prevent exactly what’s happening now. A 25-year restriction on his likeness. No holograms. No digital insertions. Nothing until August 2039.
Didn’t matter.
In October 2025, Zelda Williams posted another plea asking strangers to stop sending her AI deepfakes of her father.
“Please, just stop sending me AI videos of Dad, it’s dumb, it’s a waste of time and energy, and believe me, it’s NOT what he’d want.”

Fake Apple ads. Staged award ceremonies with Betty White. TikTok slop generated by OpenAI’s Sora 2 app—all without consent, all puppeteering a man who spent his life fighting to protect his image.
Plenty of outlets covered Zelda’s statement as celebrity news. What they missed: this isn’t about one grieving daughter or one viral app. It’s about the moral chasm between how Hollywood used digital doubles in the 1990s and 2000s—respectfully completing Brandon Lee’s and Paul Walker’s interrupted performances—versus today’s AI resurrection machines manufacturing entirely new content from the dead. Between technology serving families and technology serving engagement metrics.
This is about when cinema lost its ethical guardrails.
When CGI Meant Finishing, Not Manufacturing
31 March 1993. Brandon Lee died filming The Crow, shot at 28 by a prop gun that misfired. Roughly 80% of the film was complete. Director Alex Proyas faced an impossible choice.
The solution became one of cinema’s most respectful uses of technology.
Stunt performer Chad Stahelski—future John Wick director—stepped in as Lee’s body double. Industrial Light & Magic grafted Lee’s face from existing footage onto Stahelski’s body. That iconic shot of Eric Draven walking to a shattered window with a crow on his shoulder? Entirely Stahelski’s body with Lee’s digitally superimposed face.
Primitive by today’s standards. You can spot it if you’re looking. But it was done with one purpose: complete the performance Brandon Lee started. Honour his work. Give his family closure.
The ethics were clear because the purpose was clear.
Twenty years later, Paul Walker died halfway through Furious 7. His brothers Caleb and Cody Walker stepped forward. Weta Digital created 350 additional shots—90 using archived footage, 260 with the brothers’ faces replaced by CGI versions of Paul.
VFX supervisor Joe Letteri told The Hollywood Reporter:
“As close as the brothers were in style and mannerisms, they just weren’t Paul when Paul played his character. We really tried to limit our interpretation to things that we had seen Paul do.”
Furious 7 earned $1.5 billion and widespread acclaim for that final sequence—Brian O’Conner driving off into the sunset while “See You Again” plays. One of cinema’s great farewells.
Family consent. Completing existing performances. Respecting what the actor created.
That was the line. Technology served memory.
Robin Williams Saw It Coming
Williams died August 2014 at 63. Before his death, he assigned his publicity rights to the Windfall Foundation, a charitable organisation set up by his legal team. The estate trust filed in 2015 contains an unusual provision: zero commercial exploitation of his name, image, signature, photograph, or likeness for 25 years.
Estate planning attorney Laura Zwicker told The Hollywood Reporter:
“I haven’t seen that before. I’ve seen restrictions on types of uses—no Coke commercials for example—but not like this. It could be a privacy issue.”
It wasn’t just privacy. Williams was protecting his family from massive estate taxes—the IRS had been battling Michael Jackson’s estate over $700 million in posthumous image valuations. And he was ensuring nobody, not even his own family, could profit from resurrecting his image before 2039.
Williams rarely did commercials. Feuded with Disney in the 1990s over Aladdin merchandise. Objected to Mork & Mindy toys. Going back to his earliest success, he fought to control his likeness.
The trust was prescient. As one legal analyst noted, Williams’ representatives were likely “aware of novel technologies that have the power of essentially resurrecting dead celebrities—and hoped to avoid anything that could tarnish his legacy.”
But the trust only covers authorised use. Studios. Advertisers. Commercial exploitation.
It’s powerless against some teenager with Sora 2 making deepfakes for TikTok engagement.
That’s the void Zelda Williams is shouting into.
September 2025: The Floodgates Open
OpenAI released Sora 2 in late September—a text-to-video AI model wrapped in TikTok aesthetics. Within 24 hours, it hit number one in the App Store’s Photo & Video category. Within a week, the internet flooded with synthetic content.
Sora 2 didn’t invent this problem. Meta’s been experimenting with Vibes, its own AI video platform. Google has Veo 3. xAI’s Grok generates images. Runway ML, Synthesia, D-ID—the list keeps growing. But Sora 2 became the turning point: the moment deepfake technology went from specialised tools to a social media phenomenon.
The “cameos” feature lets users upload a short video, then insert their likeness into any AI-generated scenario. Allegedly requires consent from the person being deepfaked. Enforcement of that consent model? Optimistic.
OpenAI includes watermarks and restricts public figures. Within seven days, third-party watermark removal tools were everywhere. Users discovered workarounds for the public figure ban—rephrasing prompts, alternative spellings, or simply creating content of deceased celebrities whose legal protections don’t extend to unauthorised synthetic recreations.
Like Robin Williams.
And OpenAI’s not alone in failing to prevent misuse. Every platform racing to release video generation tech faces the same enforcement problems. The difference? Sora 2 wrapped it in the addictive interface of TikTok and made it accessible to anyone with a smartphone.
Daisy Soderberg-Rivkin, former TikTok Trust & Safety manager:
“It’s as if deepfakes have gained a publicist and a distribution deal.”
Professor Hany Farid from UC Berkeley told 404 Media:
“It seems likely that the same types of abuses we’ve seen in the past will be supercharged by these new powerful tools.”
Sora 2 isn’t marketed as dangerous. It’s marketed as fun. Social media doomscrolling but with 100% fake AI content. OpenAI positions it as bringing back community.
What kind of community is built on puppeteering the dead?
“I’ve Witnessed This For YEARS”
Zelda Williams didn’t just start speaking out in October 2025. She’s been fighting since 2023, during the SAG-AFTRA strike.
“I’ve witnessed for YEARS how many people want to train these models to create/recreate actors who cannot consent, like Dad, this isn’t theoretical, it is very very real.”
After 118 days on strike, SAG-AFTRA secured “historic” AI protections in their November 2023 contract. Employment-Based Digital Replicas require 48 hours’ notice, clear consent, compensation. Synthetic Performers—digital characters not based on specific actors—require union notification.
SAG-AFTRA president Fran Drescher: “This is a golden age for SAG-AFTRA, and our union has never been more powerful.”
But actor and strike captain Kate Bond saw the gaps: “They forgot to put protections in the AI protections.”
The contract protects working actors from exploitation within the entertainment industry. Studios. Productions. Official projects.
It does nothing about democratised deepfake technology outside it. Nothing about random users generating synthetic Robin Williams content on free apps. Nothing about estates preventing AI models from being trained on deceased performers’ work.
The union fought hard. Won significant ground.
And it wasn’t enough.
The Legal Void Nobody’s Filling
The United States has no federal law protecting digital likeness rights.
State laws exist—California’s Celebrities Rights Act grants post-mortem publicity rights for 70 years. Tennessee passed the ELVIS Act in March 2024, becoming the first state to protect voice, image, and likeness against “unethical AI use.” New York introduced post-mortem rights in November 2020, lasting 40 years. Illinois updated its law in August 2024.
Patchwork. Inconsistent. Doesn’t apply across borders. Doesn’t prevent AI models from being trained on copyrighted material. Doesn’t address what happens when a user in Texas creates a deepfake of a celebrity who died in California using technology developed in another state.
Federal legislation? Stalled.
The NO FAKES Act, circulated as a draft in October 2023 and formally introduced in July 2024, would create the first federal intellectual property right in voice and likeness. It would allow estates to notify platforms about unauthorised replicas and demand takedowns.
As of December 2025, it hasn’t passed.
Copyright Office conducting studies. Lawmakers holding hearings. SAG-AFTRA, Recording Industry Association of America, Motion Picture Association pushing for change.
None of it moves as fast as the technology.
By the time legislation catches up, Sora 3 will be out.
What Makes This Different
Brandon Lee’s digital double completed his performance. Paul Walker’s CGI recreation finished work he’d started. Done with family consent. Done to honour what existed.
Robin Williams deepfakes create new performances he never agreed to. Put words in his mouth he never spoke. Not to honour his legacy—to generate clicks for strangers.
Williams spent his life protecting his image. Fought Disney over merchandise. Put legal mechanisms in place to prevent posthumous exploitation until 2039.
Eleven years after his death, users puppet his face with free apps. Because they can.
“You’re not creating art; you’re making disgusting, over-processed caricatures out of human lives.” – Zelda Williams
That’s the difference. Completing interrupted work versus manufacturing synthetic performances. Honouring memory versus desecrating it.
“AI Is Just Badly Recycling the Past”
Zelda Williams said:
“AI is just badly recycling and regurgitating the past to be re-consumed. You are taking in the Human Centipede of content, and from the very very end of the line, all while the folks at the front laugh and laugh, consume and consume.”
Grotesque metaphor. Accurate.
AI models train on existing video, existing performances, existing creative work—much of it copyrighted, used without authorisation. They remix and regurgitate what already exists, degrading it with each iteration, spitting it back as “content.”
The people profiting? OpenAI (valued at $150 billion), Meta, Google, xAI, Runway ML, Synthesia, and so on. An entire industry racing to dominate a market built on recycled human creativity.
The people whose faces are being recycled? Their grieving families? Nothing. No consent. No compensation. No control.
And this says it all…
Former OpenAI employee to NPR: “You can’t stop progress. If OpenAI didn’t release Sora, someone else would have.”
That’s not an answer. It’s an entire industry using “inevitability” as permission.
The Moral Copyright Nobody Enforces
There’s a term circulating: “the moral copyright of the human soul.”
Not legal. Philosophical. The idea that even when someone dies, even when legal protections don’t exist, there’s still a fundamental human right to not have your identity puppeteered without consent. This goes back to my Book Adaptation Rant article.
Robin Williams can’t consent.
Zelda Williams has been carrying that burden for two years, publicly advocating against AI deepfakes while grieving. She shouldn’t have to. None of the families dealing with posthumous deepfakes should have to.
But until legislation, technology, or culture catches up, they will.
Williams spent decades protecting his image. Put unprecedented legal mechanisms in place. Saw this coming.
Didn’t matter.
The technology moved faster than the law. Faster than ethics. Faster than humanity’s ability to collectively decide some lines shouldn’t be crossed.
Now the Mrs. Doubtfire and Dead Poets Society star who carefully controlled his image for 40 years gets puppeteered by teenagers for TikTok engagement.
Zelda Williams is right to be furious.
We should be too.