The Download: the future of AI moviemaking, and what to know about plug-in hybrids
When OpenAI revealed its new generative video model, Sora, last month, it invited a handful of filmmakers to try it out. This week the company published the results: seven surreal short films that leave no doubt that the future of generative video is coming fast.
The first batch of models that could turn text into video appeared in late 2022, from companies including Meta, Google, and video-tech startup Runway. It was a neat trick, but the results were grainy, glitchy, and just a few seconds long.
Fast-forward 18 months, and the best of Sora’s high-definition, photorealistic output is so stunning that some breathless observers are predicting the death of Hollywood. But fears of misuse are growing too. Read the full story.
—Will Douglas Heaven
This piece is part of MIT Technology Review’s What’s Next series, looking across industries, trends, and technologies to give you a first look at the future. You can read the rest of them here.
Interested in learning more about how filmmakers are using Sora? Check out how three of them are already experimenting with it to create stunning videos—and find out what they told us they believe is coming next.
What to expect if you’re expecting a plug-in hybrid