Perplexity Pro and Max users can now give voice to their words, quite literally. The company has added a new capability to its AI search interface: generating short eight-second videos with audio, simply by typing a prompt.
What might sound like a clever side feature is rolling out in plain sight, quietly changing how users interact with AI.
And it is not just about party tricks or short-lived viral memes. The video feature is built directly into the search flow, so when someone asks for help drafting a presentation or a visual concept, they can get a video in return, not just text or an image. And yes, that includes sound.
What It Does—and How It Works
Triggered like any other query, Perplexity's AI video feature responds to plain-language prompts on the web, iOS, and Android. Pro users get limited access, while Max subscribers get unlimited high-quality output.
Generated clips are capped at eight seconds, always in landscape format with synced audio. Curious? Type something like "Create a video of waves crashing at sunset" and let the magic unfold.
Why It Matters
This isn't just another bolt-on AI novelty; it's about doing more with one tool. Rather than hopping between apps for search, scripting, and visuals, users can now turn research, reports, or casual prompts into dynamic visuals in one place.
It’s the difference between sketching an idea on paper and seeing it move in real time.
What’s more, this move places Perplexity squarely in competition with multimodal rivals like Google’s Veo 3 and OpenAI’s Sora, making video generation part of the search ecosystem instead of a separate appendage.
The Strategic Layer
Behind the scenes, the decision also signals ambition. Perplexity just raised its valuation to $20 billion, up from $18 billion in July. Backed by heavyweights like Jeff Bezos and Nvidia, and actively pursuing partnerships (like its Chrome browser bid and Airtel access deals), it’s clear the company aims to be a dominant AI-first gateway—search, video, and beyond.
The Human Edge: Why I Think This Hits Differently
I tested the feature with my own research prompts. A typed query for "urban nightscape with neon lights vibrating to ambient music" yielded a dreamy eight-second video with subtle motion and a soundtrack: not perfectly cinematic, but enough to spark that "wow, I made that" energy. No extra tools, no editing timeline; just prompt in, video out.
That ease of use, paired with steadily improving quality, gives me hope. It's not the final frontier yet, but this integration makes creative output feel almost conversational rather than transactional.
What’s Next?
We’re on the brink of seeing more AI tools blur the line between content types. Perplexity might soon offer longer videos, more editing flexibility, or seamless export to other platforms. For now, the modest rollout feels like a hint at what’s to come—search engines as creative canvases, not just answer machines.