
Adobe Firefly (Beta) Review – Pros and Cons of Their New Generative AI

The buzz surrounding artificial intelligence doesn’t seem to fade. On the contrary, following nimble start-ups, major media software companies are now venturing into this emerging field as well. Perhaps the strongest example is Adobe, which announced its own generative AI a couple of months ago. Their plans for integrating the new tools into their software are wildly ambitious, to say the least. We joined the beta and tested Adobe Firefly to give you a thorough overview of its current capabilities. Spoiler alert: it doesn’t match the precision of Midjourney yet, but it may well get there.

At NAB 2023, Adobe also revealed their plans to expand Firefly’s features for quick and easy video production. This will enable users to change the color scheme of a shot by giving AI text commands or to generate a fully sketched storyboard directly from a script. Click “Make previz” and go get some coffee, while the neural network strains its deep-learning brain to create a full 3D visualization for you. What a time to be alive, right?

All jokes aside, we will have to wait and see how Adobe’s plans come to fruition. For now, we can only evaluate what already exists and is available for a test run, such as Firefly’s text-to-image generator, its extra features for creative text effects, and the recoloring of vectors. Let’s dive right in!

What is Adobe Firefly and how to test it?

Firefly is Adobe’s new area of research, focused entirely on AI-driven tools and generative models. The developers started with image and text-effect generation, but they won’t stop there. Their core goal is to find every possible way to speed up and improve creative workflows, and then integrate those tools into Adobe’s existing products like Photoshop, Premiere Pro, or InDesign.

Firefly is the natural extension of the technology Adobe has produced over the past 40 years, driven by the belief that people should be empowered to bring their ideas into the world precisely as they imagine them.

Quote from Adobe Firefly’s website

And that’s where the community enters the game. Through the beta, Adobe encourages users not only to help refine the existing models but also to suggest new helpful features (more detailed information follows below). To become a Firefly tester, simply click “Join the beta” here and wait for the invitation, which may take a couple of weeks.

A screenshot from the Firefly web gallery. Image source: Adobe

Once you have it, you can use Adobe Firefly directly in your browser (including Chrome, Safari, and Edge on the desktop). Bear in mind that the AI beta currently doesn’t support tablets or any mobile devices.

Interface and usability of Adobe Firefly

If you are already familiar with other famous text-to-image generators like Midjourney or Stable Diffusion (we reviewed them here), the interface of Adobe Firefly will blow you away. At least, this was my first impression. Not only does it look very user-friendly, but it also has a number of creative parameters that are very easy to play with. Need another aspect ratio? Press a button. Wanna change your content type from photo to graphic? Press a button. Going for a pastel color palette? You get it, press a button.

Adobe Firefly’s interface in Chrome. Credit: images created with Firefly by CineD

Essentially, when you change those styles, the program only adds words to or omits them from your text prompt below. But compared to Midjourney, where you have to chat with a bot to refine your result, this interface feels like a breakthrough. What’s curious is that while you play with different parameters, you can ask the model to apply them to pictures you’ve already generated, or you can click “Refresh” and get a completely different set of images. In the collage below, I combined three slightly varied style tests of the same girl and got different results.

Trying out different style preferences. Credit: images created with Firefly by CineD
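To illustrate the prompt mechanism described above: the style buttons behave as if they simply append or drop keywords in the final prompt string. Here is a purely hypothetical Python sketch of that behavior – the function and the keywords are invented for illustration and have nothing to do with Adobe’s actual code:

    # Hypothetical sketch of how Firefly's style buttons appear to behave:
    # each toggle adds a keyword to (or removes it from) the text prompt.
    def build_prompt(base: str, styles: list[str]) -> str:
        """Append the active style keywords to the base prompt."""
        return ", ".join([base] + styles) if styles else base

    base = "portrait of a girl covered with flowers"
    print(build_prompt(base, ["photo", "pastel colors", "shot from above"]))
    # -> portrait of a girl covered with flowers, photo, pastel colors, shot from above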

However, after several tests, I have to admit: the parameters don’t always work as promised. It’s not magic yet, so you won’t instantly get the same images “shot from above” just by changing the composition properties. Nor can Firefly convert your wide-angle picture into macro photography within seconds. One piece of advice I can offer: choose all your style settings before clicking “Generate” – it’s the best way to get closer to the result you want.

Another helpful trick you might overlook: hover your mouse over one of the images, and a “Show similar” button will pop up in its upper left corner. Click it and you will get different variations. Adobe also regularly shares useful tips in their live streams, like this one, for example.

Adobe Firefly tested: photorealism is not its strength yet

Let’s take another look at the generated face of a girl covered with flowers, which I posted as the first test. What do you notice? First of all: wow! I like how Firefly offers a range of different ethnicities within a single image set. The developers often stress that they are training the model to be unbiased, and here we can clearly see their progress.

At the same time, I wouldn’t mistake these results for actual photos (although I chose the content type “photo” for this generation). They just don’t look real to me, and I’m not the only one who thinks so. In the comments on Adobe’s live streams, other users also note that Firefly lags behind in photorealism when generating people. Maybe we are just spoiled by Midjourney, which has developed by leaps and bounds and already carries the title of “photorealistic wonder” within the AI community. To corroborate my initial impression, I tried the same text prompt in both applications, and, well, see for yourself:

Same prompt, different AI. On the left: image created by Adobe Firefly. On the right: Midjourney’s work. Credit: created for CineD

Here, Adobe Firefly visibly struggles with creating realistic limbs – a problem all deep-learning models face during development. The good news: the creators know and acknowledge it. They say that while beta testers will inevitably come across weird artifacts in some scenarios, the neural network will keep learning and gradually improve. And when you ask it to generate an interior or a landscape, the results are already much smoother.

A lavender field at sunset. Image credit: created with Firefly by CineD
Trying to get a realistic interior. Image credit: created with Firefly by CineD

Fast and simple text effects

The next feature we tested is a text effect generator. The tool works similarly: just type in a prompt explaining the style, add the text you want to alter, and enjoy. Here we introduce you to a high-tech steampunk variant of the CineD logo:

Image credit: created with Firefly by CineD

The software also lets you adjust some parameters, such as the text color (doesn’t really work yet), the background color (works, but has issues with transparency and text outlines, which the developers promise will improve over time), or how far the effect should stretch beyond the letters. One tip found on Firefly’s Discord suggests adding [outline-strength=10] (with values from 10 to 100) to the text prompt to create a bit of extra chaos and push the effect elements further outside the letterforms.
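For example, a prompt using that tag might look like the line below. The wording of the prompt is my own – only the tag syntax comes from the Discord tip:

    futuristic steampunk machinery, brass gears and glowing tubes [outline-strength=40]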

After some attempts, I realized you get much better results if you describe only two or three simple characteristics, like texture + color. Nothing more, nothing less. This approach creates minimalistic yet easily recognizable effects.

Different text effects in more minimalistic styles. Image credit: created with Firefly by CineD

Recoloring vectors in Adobe Firefly

This is a brand-new tool that wasn’t available even a week ago. To be fair, I don’t really work with vectors, so I can’t judge how difficult it is to recolor them manually. Still, let’s give artificial intelligence a try.

Image credit: created with Firefly by CineD

In this example, I took a simple film-slate icon from my Instagram and asked Firefly to apply a neon color palette to it. The results met my expectations and are also available to download as SVG files. I’m not sure a designer would present one of them to a client as a final version, but arguably that’s not the point of using generative AI.
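For context, recoloring a simple vector by hand essentially means editing the fill values inside the SVG markup. Here is a minimal Python sketch of that manual route – the file name and the neon palette are invented for illustration:

    # Minimal sketch: manually swapping an SVG's fill colors for a neon palette.
    # The file name and the color mapping are made up for this example.
    import re

    palette = {"#1a1a1a": "#39ff14", "#e0e0e0": "#ff00ff"}  # old fill -> neon fill

    with open("film_slate.svg") as f:
        svg = f.read()

    for old, new in palette.items():
        svg = re.sub(re.escape(old), new, svg, flags=re.IGNORECASE)

    with open("film_slate_neon.svg", "w") as f:
        f.write(svg)

Firefly automates this step and generates several palette variations in one go, which is where the time saving comes from.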

What data Firefly’s AI is trained on

This is my favorite question, for sure, because it’s part of a huge discussion about ethics in the artificial intelligence field. Unlike other developers, Adobe decided to take a different – and arguably the most legally sound – path. They claim to train Firefly’s models exclusively on Adobe Stock images, openly licensed footage, and public domain content whose copyright has expired.

This allows them to curate the training data against harmful or biased content and to respect artists’ ownership and intellectual property rights. It also means that, in the future, we should be free to use Firefly-generated content in commercial projects.

Your fantasy is the only limit. Or is it? Image credit: created with Firefly by CineD

Limitations

  • Adobe Firefly doesn’t currently support the upload or export of video content;
  • You cannot train the model on your own footage, as is possible in Stable Diffusion, for example (which is a pro and a con at the same time);
  • While still in beta, you can only use Firefly for non-commercial purposes (and be prepared for a visible watermark on your generated images, like the one you probably noticed in my examples above);
  • At the moment, Adobe’s generative AI only supports English prompts;
  • This release doesn’t allow saving your created works to Creative Cloud (this feature should be enabled in the future);
  • You won’t be able to create humorous images of famous people or brand mock-ups in Firefly, as it only uses photos of public figures that are available for commercial use on the Stock website (excluding editorial content).

Other features coming up this year

So, yes, generating images with Adobe Firefly is still far from perfect, but it’s exciting to watch its AI evolve. The developers are teasing various features that will become available for beta testing this year – among them image extension, smart portraits, and an inpainting function.

A demonstration of the upcoming inpainting function. Image credit: Adobe

And if you have an exciting idea for how to use this technology in Adobe applications, you can join the Firefly Discord server and talk directly to the engineers (if you’re a member of the Adobe Community forum, you can also do it there). They are open to feedback and encourage testers to take part in the future exploration of AI.

Conclusion

Since the purpose of the beta phase is to help Adobe create new tools that will someday speed up our workflows, they ask Firefly users to report all bugs and rate the generated results. You can do this directly in the interface: there are thumbs-up/down buttons on every image, as well as a Report tool for providing feedback. I will definitely get back to them with my detailed review.

What do you think about this new AI? Have you already tested Adobe Firefly? How did you like it? What new tools do you expect to see in future releases? Let’s talk in the comments below!

Feature image: created with Adobe Firefly by CineD.
