It started like a joke — a surreal video of Sam Altman standing in a glowing field, surrounded by a crowd of animated Pokémon, grinning as he says, “I hope Nintendo doesn’t sue us.”
The clip, made entirely in OpenAI’s new Sora video generator, went viral within hours. But what seemed like harmless AI absurdity turned into a PR storm so intense that OpenAI has now rolled back one of its most controversial policies.
The company has now dropped its “use anything” copyright stance entirely, a reversal that came only after the Altman-and-Pokémon video spread across social media like wildfire.
The original version of Sora allowed users to generate videos using copyrighted characters — even from major franchises — unless the rights holders explicitly opted out.
That meant everything from Dragon Ball to Mario could appear in user-generated clips by default.
Within days, the internet was flooded with strange, dream-logic mashups: AI versions of celebrities, animated characters appearing in realistic settings, even a bizarre short that showed Sam Altman “stealing GPUs” from Target for “AI research.”
What was meant to be a playful launch quickly turned into a legal and ethical mess.
In response, OpenAI’s leadership scrambled. Altman himself acknowledged the chaos in a follow-up post, promising a new “opt-in” framework where only content from rights holders who explicitly grant permission can appear in AI-generated videos.
It’s a sharp contrast to the earlier “anything goes” model. The company also pledged to give artists and studios “granular control” over whether their creations appear in Sora’s training data or outputs — a promise outlined in OpenAI’s updated copyright statement.
The decision wasn’t just about optics. After the viral fallout, several major Japanese studios raised concerns about Sora’s treatment of copyrighted media.
Lawmaker Akihisa Shiozaki even questioned whether Japan’s creative rights were being overlooked, pushing OpenAI to clarify its policy toward foreign IP.
Industry observers pointed out that Nintendo, famously protective of its brands, could easily have turned the meme into a courtroom reality.
That pressure reportedly pushed OpenAI to commit to applying its copyright rules consistently across markets rather than treating foreign IP differently, as covered in a detailed analysis of Japan’s response to Sora’s copyright gap.
This isn’t the first time OpenAI has walked a tightrope between innovation and controversy. The launch of the original Sora model had already raised concerns about realism, misinformation, and consent.
Some users created eerily convincing videos of public figures and fictional characters in compromising situations.
A separate investigation into Sora’s early outputs revealed instances of violent and racially biased imagery slipping past moderation — prompting calls for stricter safeguards even before the copyright scandal hit.
To its credit, OpenAI seems to have recognized that the stakes go beyond memes. As Altman’s team reworks Sora’s rules, the company faces a broader question: how do you balance creativity and control when machines can copy almost anything?
In the words of one developer I spoke to, “AI doesn’t know the difference between inspiration and infringement — but the law definitely does.”
Personally, I think this moment will be remembered less for the policy retreat and more for what it exposed: that even the people building AI can get caught off guard by how unpredictable it really is.
One ridiculous Pokémon video may have embarrassed OpenAI, but it also reminded the world that the future of AI creativity — and ownership — is still being written, one surreal clip at a time.