A Sneak Peek into New AI Video Tools from Adobe – The Future of Filmmaking?

November 1st, 2023

Half a year ago, at NAB 2023, Adobe promised a major release of their AI model Firefly for video. The announcement at the time included things straight out of science fiction: neural networks that would turn our scripts into finished storyboards with a single mouse click, or intelligent mechanisms capable of generating sounds that perfectly match the visuals. While we haven’t reached that point just yet, it turns out there are many more AI tools from Adobe on the way. In the recent Adobe Sneaks session, engineers revealed a bunch of other ongoing projects that are truly mind-blowing. Some of them make me feel like the future of filmmaking is coming soon. Or is it? Let’s take a closer look together!

The Adobe Sneaks demonstration covered all kinds of projects under development, including tools for generating 3D forms, enhanced AI photo editing, and instruments for simple video compositing. You might also have heard of Project Primrose, which garnered wild reactions all over the Internet. It explores how the fashion industry would change if our garments could be reconfigured as easily as their designs, simply by clicking a button. Have you ever seen a non-static digital dress that lets its owner refresh the outfit in the blink of an eye? Definitely worth a look!

Below, we picked several “work-in-progress” AI tools from Adobe that sounded especially interesting for filmmakers. If you want to watch the entire livestream, head over here.

Storyboarding like a pro with AI tools from Adobe

Personally, I know only a handful of filmmakers who don’t sketch out their shots prior to filming. But the accuracy of those pictures varies depending on both the skills of their creators and the time they have available. As you know, not every production (especially not the indie ones) can afford a professional storyboard artist, and yet you still want to bring your vision to paper and communicate it to the team. Well, lo and behold! Adobe’s Project Draw and Delight might become your right hand in this respect.

At first sight, it looks like a new image generator similar to Midjourney or Firefly. Yet you don’t have to tediously adjust a text prompt to get a picture close to your idea. What this new tool basically does is turn a very rough outline of an object or a character into a detailed doodle. In the presentation, the developer sketched a cat playing with a ball and let the AI interpret it. As you can see below, the result exceeded expectations, perfectly matching the cat’s pose. And that was only the beginning.

This model also allows creators to add some text input, easily change the pose of a generated character (for example, by commanding “the cat dances”), color it with a couple of rough brushstrokes, and insert a background or another subject using the same simple scribbling style. The most exciting part? All the images are created as vectors, so you can take them into Adobe Illustrator, then move them around and alter them as you like. Sounds like the perfect way to create quick storyboards on your own.

Level up for content-aware fill

If you work with any VFX in Adobe After Effects, you are probably already familiar with content-aware fill. With its help, filmmakers can mask out distracting elements in the shot, and the tool will do its best to fill in the gaps. Now, Adobe has decided to take this technology to a new level and presented Project Fast Fill.

The demonstration consisted of three different showcases. In the first one, the artificial intelligence removed people from the background, seamlessly matching the camera movement of the video (just like content-aware fill, but a bit more advanced).

The further examples looked so unreal that I’m eager to try them out myself. With a simple mask, a few words, and a press of the “generate” button, the neural network added a tie to a walking businessman and exchanged the pattern on the surface of coffee swirling in a close-up shot. The crazy part: the new elements didn’t feel out of place. The lighting changed consistently with the rest of the video, the tie’s fabric moved in sync with the body, and the coffee’s wobbling blended so seamlessly with the new pattern that you would never guess it was altered.

Remember all those times you struggled to composite a client’s rebranded logo onto a product to update an older commercial? Seems like that will soon be nothing but a memory.

Upscaling videos within your Adobe software

Another tool Adobe is going to release is nothing new to the market. The so-called Project ResUp is an AI upscaler, as the name suggests. There are several leading applications in this field, the widely-known Topaz Labs among them. However, the ability to upscale low-resolution footage directly in Adobe software will definitely make things faster and more convenient for editors (assuming you work with Adobe products).

Like its competitors, ResUp will process images and videos from different sources, even those smaller than HD. The presentation featured sequences from a drastically zoomed-in shot, a scene from an old movie, and even a tiny GIF. The results looked convincing, but we will have to wait for public access to make a final judgment.

AI tools from Adobe - ResUp upscaler project
Image source: Adobe

Dubbing tool for audio: speak several languages without learning them

And now to the project that freaked me out. Using AI, Dub Dub Dub will apparently be capable of dubbing your video clip into foreign languages automatically. In the showcase, the neural network not only translated a woman’s speech but also recreated her voice and generated video versions in several languages she probably doesn’t even know. A remarkable result, but also utterly intimidating, to be honest.

AI tools from Adobe - Dub Dub Dub project
Image source: Adobe

Surely, it will make things easier for a lot of creators out there, with YouTube vloggers at the forefront. Yet I imagine that seeing myself speak Chinese – a language I never learned – would make my hair stand on end. There is also the issue of processing and storing my voice (or a client’s voice, for that matter). Where will all this data go? Who will have access to it?

Some comments on this tool that I found on social media confirm that I’m not alone in my worries. Users are afraid this new AI technology might be misused with bad intentions. They also wonder whether it might create a distorted picture in people’s minds, along the lines of “If AI can do it, why should I hire a human creator at all?”

The text-to-video approach in AI tools from Adobe

Another Adobe livestream included a couple of words on their text-to-video approach. As you probably know, several companies are working on technologies that will allow us to type a scene description into a simple text box and get back a generated video (for instance, we wrote about Google’s research here). In fact, Runway’s Gen-2 already does this – although not flawlessly, of course – and you can even try out their neural network.

Adobe doesn’t want to be late to this party, so they also introduced the new Firefly Generative AI for video, which might be integrated into Premiere Pro in the near future.

According to the demonstration, we will also be able to upload a still image and let Firefly animate it for us. Sounds like a possible replacement for all the pans and zooms that editors and content creators often apply to such pictures to make the videos more dynamic.

On top of that, Adobe promises to let users train their own custom models, which will hopefully be easier than figuring out this complicated task on our own.

The new Firefly model released and other AI tools from Adobe

All the above tools are only a sneak peek. We can’t test or use them yet and will have to wait for their public release. Will they change filmmaking and become a huge part of our everyday workflow? Maybe. Maybe not. What we have to keep in mind, regardless, is that these projects are only the beginning, and this technology will only get better over time.

For example, the new model of the Adobe Firefly image generator, which was recently released, promises to deliver much more natural-looking and consistent imagery than its predecessor. If you have an Adobe account, you can try it out here alongside some other graphics tools. We will do a thorough review of the new model soon, so stay tuned!

What is your take on the recently announced AI tools from Adobe? Do you think they might revolutionize the future of filmmaking? Which one do you think would be useful in your daily routine? Please, share your thoughts in the comments below!

Feature image source: Adobe.

