Market Report


Sora joins the Adobe suite: videos can now be edited to add objects and scenes.

Adobe's Creative Cloud suite will soon gain the most advanced generative AI video creation capabilities.


Today, Adobe announced plans for a new version of Premiere Pro that will add plug-ins for third-party AI video generation models. Whether it is OpenAI's Sora, Runway's Gen-2, or Pika, these models will soon be available inside Adobe's tools.


Now, powered by Adobe's own large model Firefly, you can add or remove content directly in your video footage.


Missing a background shot for a transition? Just have OpenAI's Sora generate one automatically.

The new tool can also "extend" the length of existing footage, conjuring new frames out of thin air, though this feature does not appear to be driven by a text prompt.

Since OpenAI unveiled Sora in February this year, we have only been able to see official demos of its AI-generated videos on the company's TikTok account; the tool itself has remained in a "coming soon" state. The unveiling of the new version of Adobe Premiere Pro may signal that the technology will be released shortly.


For Adobe's 33 million paid Creative Cloud users, this is an even bigger deal: it could bring the most radical, revolutionary change yet to how they create.


With this feature, Premiere Pro users will be able to edit, process, and mix live-action video captured by traditional cameras together with AI footage. Imagine filming an actor performing a scene of escaping from a monster, then using artificial intelligence to generate the monster: no props, costumes, or additional actors required, with the two clips accessible in the same editor and combined in the same video file.


The same goes for animation created with more established processes, whether computer-generated or hand-drawn frame by frame, which can be blended with matching AI footage in the same file in Premiere Pro.


It is worth mentioning that Adobe's demonstration once again showed the gap between OpenAI's Sora and similar products: the video it generates looks far better than what other available tools produce.




Unlike many of Adobe's previous Firefly-related announcements, the new video generation tools do not yet have a release date; Adobe has only said they will launch this year.


Introducing third-party advanced AI models is a forward-looking exploration of video processing. According to Adobe, the idea is to give Premiere Pro users more options. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models were used to create them.


Adobe's Premiere Pro has been one of the world's most popular video editing programs since it was first released for the Mac in late 1991, and is now used by major Hollywood film editors and independent filmmakers around the world. It is about to undergo a revolution unprecedented in its 33-year history.




It is important to note that Adobe has yet to say when these third-party AI video generators will be integrated into Premiere Pro, and the details do not appear fully finalized; many of the third-party tools may require a paid subscription at launch.


In addition, Adobe continues to promote its own in-house generative AI products (such as Firefly and Generative Fill), emphasizing that its models are trained on data it owns or is licensed to use, such as content contributed by Adobe Stock creators (much to the chagrin of some Adobe Stock photographers and artists).


Adobe ups the ante with generative AI

Adobe specializes in multimedia creation and creative software. After generative AI technology took off, the company moved quickly into the fray so as not to be left behind.


Last week, Bloomberg reported that Adobe had trained Firefly in part on images generated by its competitor Midjourney, which itself builds on the open-source model Stable Diffusion, trained on publicly scraped and copyrighted web data.


Today, Adobe announced that a version of Firefly's text-to-image generative model will be integrated into Premiere Pro "later this year," providing a new set of "generative AI workflows" and capabilities.


For example, the Generative Extend feature will let video editors and filmmakers "seamlessly add frames to make video clips longer" without shooting any new footage, which could prove both useful and money-saving. Adobe also says it will allow for smoother transitions, such as extending a clip that ends too abruptly so it lingers on a moment or action a little longer.


Firefly for Video will also give Premiere Pro users intelligent "object detection and removal": users highlight objects in the video (props, characters, costumes, scenery, and so on) and the AI model tracks them across frames. Users can then leverage generative AI to replace those objects with new ones, quickly change a character's clothing or props, or remove objects entirely across multiple clips and camera angles.


Finally, Firefly for Video will also come with a text-to-video generator, making it comparable to Sora, Runway, Pika, and Stable Video Diffusion.


Although still in preview, Adobe's next-generation AI integrations for Premiere Pro are already winning applause from filmmakers and social media creators.