There’s something both fascinating and terrifying about watching a piece of software recreate reality — or at least something that looks like it. OpenAI’s new video app, Sora, is doing just that.
According to a recent investigation into the app’s viral rise, users are feeding short clips of themselves into Sora, only to discover later that their likeness is being used in entirely fabricated videos.
Imagine opening your feed to find yourself arguing in a courtroom, or dancing in a club you’ve never visited — it’s you, but not you.
The app exploded in popularity, turning into Silicon Valley’s latest obsession almost overnight. But not everything about it feels like innovation worth celebrating.
Some early users, as described in a follow-up story on Sora’s viral “cameo” culture, said they laughed the first time they saw their friends’ AI versions.
Then it sank in: the system had recreated their voices, gestures, and faces down to the smallest detail — and there was no way to stop it before it spread.
What makes this so disorienting is how natural the results look. Unlike older deepfake software, Sora doesn’t just swap a face onto someone else’s body. It generates the entire clip from scratch — movement, lighting, even sound — in ways that fool not just the eye but the instinct.
The company says it’s added digital watermarks and “liveness checks” to prevent impersonation, but watchdog groups have found the filters easy to bypass.
Reports have surfaced of violent or racially biased clips circulating online, prompting experts to warn that, as one recent investigation bluntly put it, “the guardrails are not real.”
There’s also a broader storm brewing over consent and ownership. While OpenAI says users can request takedowns, critics argue that the entire “opt-out” framework puts the burden on individuals rather than on the creators of the system.
One researcher described it as “privacy in reverse” — you have to ask not to be included. That tension is already sparking emotional reactions across the entertainment world.
Zelda Williams, daughter of the late Robin Williams, recently condemned AI recreations of her father's voice, calling them a "violation of his humanity."
Her statement followed a wave of outrage from actors and creators, highlighted in a report on the backlash to synthetic celebrity videos.
Meanwhile, the tech community seems split between awe and alarm. Venture capitalist Vinod Khosla recently defended Sora, arguing that “critics have tunnel vision” and that such generative tools will democratize creativity.
A commentary dissecting Khosla's response to the backlash notes that he dismissed the controversy as resistance to progress, though he never directly addressed the ethical nightmare of synthetic identities.
From where I’m sitting, it feels like we’re standing at the edge of something enormous — and unstable.
There’s a real thrill in watching AI nail the realism of a moving human face, but there’s also that queasy thought that you might see your own face doing something you never said or did.
It’s the kind of innovation that makes you look twice at every video online, wondering if the person on screen ever even existed.