New York City-based AI startup Runway has released its latest AI video generator, called Gen-3 Alpha — and judging by sample clips the company has shared so far, it’s seriously impressive.
From an astronaut running through an alley in Rio de Janeiro to a believable suburban neighborhood flooded by water and surrounded by a coral reef, Gen-3 Alpha serves as yet another reminder of how far generative AI has come.
The level of fidelity is impressive, from a strange and intimidating creature wandering down a lamp-lit street to a woman running towards a launching rocket. Human faces in particular are incredibly lifelike: one clip shows a bald man as a "wig of curly hair and sunglasses fall suddenly on his head."
In short, it’s an impressive albeit terrifying glimpse of the near future. The clips suggest Runway’s latest AI model can trade blows with OpenAI’s recently announced Sora, which has yet to be released to the public. But before crowning a winner, we’ll wait until we’ve had a chance to give both of these tools a spin for ourselves.
Some online pundits, however, seemingly have already made up their minds.
“Even if these are cherry-picked, they already look better than Sora,” one Reddit user argued.
Alongside its latest video generator, Runway is also releasing a series of fine-tuning tools, including advanced camera controls.
The company claims Gen-3 is a step towards its more ambitious goal of realizing what it calls “General World Models,” which would be an “AI system that builds an internal representation of an environment, and uses it to simulate future events within that environment.”
As with OpenAI and Sora, Runway has yet to commit to an exact launch date for the model. It’s also unclear if Runway will charge users for access. The company already sells subscriptions for its existing AI tools, which include Gen-3’s two predecessors, among other AI-based video editing tools.
According to Runway cofounder and CTO Anastasis Germanidis, Gen-3 Alpha “will soon be available in the Runway product, and will power all the existing modes that you’re used to (text-to-video, image-to-video, video-to-video), and some new ones that are only now possible with a more capable base model.”
Much like those from OpenAI’s AI video generator, the samples generated by Gen-3 Alpha are far from pixel-perfect, marred by flaws ranging from mangled text to missing body parts.
While Runway claims on its website that Gen-3 Alpha was “trained jointly on videos and images,” the company stopped short of elaborating on where this data came from — an increasingly common pattern among AI companies.
Seemingly pre-empting concerns over possible copyright infringement, Runway also claims that it’s been “collaborating and partnering with leading entertainment and media organizations to create custom versions of Gen-3.”
We’ll reserve judgment until we’ve had a chance to take Gen-3 Alpha for a spin — but from what we’ve seen so far, Runway isn’t messing around.
More on AI video: This AI-Generated Pixar Style Animation Just Might Blow Your Mind