OpenAI seems to have learned a hard truth about creativity and control — you can’t build the future of film without the people who own it.
In a surprising about-face, the company has announced that Sora, its viral AI video generator, will soon let content creators and studios decide how their characters and likenesses are used, while also opening the door to revenue sharing for those who opt in.
The shift, detailed in OpenAI’s latest update about new rights management and monetization plans, marks a major turn in how the company handles intellectual property.
What’s changing is not just the policy, but the power dynamic. Rights holders will now have granular control — the ability to block, approve, or profit from the use of their IP inside Sora’s generative models.
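OpenAI hasn't published the technical details of those controls, but conceptually they boil down to a per-character policy that the generation pipeline has to consult before it renders anything. Here's a minimal sketch in Python of what such a policy could look like; the studio, character, and revenue-share field are purely illustrative assumptions, not anything OpenAI has actually described.

```python
from dataclasses import dataclass
from enum import Enum

class Permission(Enum):
    BLOCK = "block"        # never allow generation with this IP
    APPROVE = "approve"    # allow use, no revenue share
    MONETIZE = "monetize"  # allow use and share generation revenue

@dataclass
class RightsPolicy:
    rights_holder: str        # e.g. a studio or an individual creator
    character: str            # the protected character or likeness
    permission: Permission
    revenue_share: float = 0.0  # fraction of revenue; only meaningful for MONETIZE

def is_generation_allowed(policy: RightsPolicy) -> bool:
    """A request referencing the character proceeds only if the policy permits it."""
    return policy.permission in (Permission.APPROVE, Permission.MONETIZE)

# Hypothetical example: a studio opts a character in for monetized use at a 20% share.
policy = RightsPolicy("Example Studios", "Captain Nova",
                      Permission.MONETIZE, revenue_share=0.20)
assert is_generation_allowed(policy)
```

The interesting design question is the default: whether a character with no policy on file is treated as blocked or as fair game, which is exactly where the friction described below comes from.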
Some studios, however, are already keeping their distance. Disney, for instance, reportedly decided to exclude its characters entirely, after users started making clips that blurred the line between parody and piracy, as described in coverage of how studios are reacting to Sora’s policy update.
Meanwhile, the app itself continues to explode in popularity. The new Sora 2 update introduced lifelike motion, real-time voice synthesis, and sharper scene physics — upgrades that helped it rocket to the top of Apple’s App Store rankings within 48 hours of release, according to a report on Sora’s App Store surge.
The buzz has been so wild that early-access invite codes have been listed for resale online, with some going for hundreds of dollars.
Of course, popularity comes with pressure. Critics have accused OpenAI of moving too fast, arguing that by handing users near-limitless creative power, it also invited chaos.
In response, the company is adding new opt-in systems for rights holders and automated detection of trademarked visuals, according to an insider analysis of Sora’s evolving copyright tools.
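OpenAI hasn't said how that detection works under the hood, so treat this as a rough sketch of where such a check could sit: a gate between generation and publishing that consults an opt-in registry. Every name here is a hypothetical stand-in, not OpenAI's actual pipeline.

```python
# A hypothetical moderation gate. The registry, the tags, and the function name
# are illustrative assumptions only; they show where a rights check could sit,
# not how Sora's detection actually works.

# character tag -> has the rights holder opted in?
OPT_IN_REGISTRY = {
    "captain_nova": True,     # hypothetical character, holder opted in
    "blocked_mascot": False,  # hypothetical character, holder opted out
}

def gate_before_publish(detected_tags: set[str]) -> bool:
    """Assume an upstream detector has tagged the clip with any protected
    characters it recognized; block publication unless every one is opted in."""
    return all(OPT_IN_REGISTRY.get(tag, False) for tag in detected_tags)

print(gate_before_publish({"captain_nova"}))                    # True: opted in
print(gate_before_publish({"captain_nova", "blocked_mascot"}))  # False: one holder opted out
```

Whether the check runs before a clip is published or only after the fact is the crux of the loophole discussed further down.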
But even with these changes, legal experts warn that the concept of “permissioned AI creativity” is still murky territory, where the boundaries between homage, parody, and infringement are anything but clear.
The deeper worry is cultural. Many artists fear a slippery slope in which, instead of paying creators for originality, AI models simply recycle and remix their styles endlessly.
That tension was on full display this week, after some users generated scenes that appeared to mimic iconic movie moments, prompting OpenAI to tighten filters.
Still, the company insists its vision for Sora isn’t just about automation — it’s about collaboration.
Sam Altman has framed it as a chance to “democratize visual storytelling,” even as critics question whether democratization can coexist with copyright capitalism.
Adding to the complexity, a new report examining Sora’s social impact revealed that some rights holders still struggle to completely opt out of the system.
In practice, this means a clip or likeness might still appear in user-generated content before detection systems catch it — a loophole OpenAI admits it’s racing to close.
Behind all this noise, there’s a quiet irony. Sora’s original tagline promised to “make imagination limitless.”
Yet what we’re seeing now is a company trying to draw limits carefully enough to keep imagination safe — for creators, for studios, and maybe for itself. It’s a delicate balance: keep the artists happy, the lawyers calm, and the users entertained.
But if OpenAI can truly pull that off, it won’t just rewrite the rules of filmmaking — it’ll rewrite the meaning of creativity in the age of algorithms.
And between us, I think that’s the most fascinating part. We’re watching an experiment in real time: a clash of invention and ownership that feels both inevitable and messy, like the first draft of a movie that everyone wants to direct.