
Latest AI Launches for Animators – Text to 3D Models and More

The artificial intelligence wave is still rising, and it is difficult even for experienced surfers to stay on top of it. Every week brings big names and huge announcements, with Google, Microsoft, and Adobe now part of the race. Don’t worry, we don’t want to overwhelm you even more. Instead, we have picked out the updates relevant to the filmmaking world. The focus of this overview: a handful of exciting AI launches for animators, or for those who wish to become one.

I guess everyone is familiar with Midjourney and Stable Diffusion, the deep-learning image generators that have been around for a while. However, they can only produce 2D images, and the initial excitement has faded. The next stage involves three-dimensional models, and that’s where research on generative AI looks set to impress us again. Let’s take a look at a few of the new tools.

AI launches for animators: from text to a 3D object

This month, OpenAI (the company behind the famous ChatGPT) introduced Shap-E. Heewoo Jun and Alex Nichol, researchers at the company, published a paper announcing a conditional generative model for 3D assets. Sounds complicated, I know, but its basic working principle is actually quite simple. Shap-E learned to associate text or images with corresponding 3D models from the massive dataset it was trained on. So, now you can feed the neural network a few words, or a simple picture, and it will produce the best-matching three-dimensional result.

Unlike ChatGPT, Shap-E doesn’t have a public beta yet, but you can download its model weights, inference code, and samples from GitHub for free. However, you may need some machine-learning experience to get it up and running. Afterward, you can open the generated models in, say, Microsoft Paint 3D, and the renders you create can even be converted to STL files for 3D printers. Imagine: it’s like literally holding your ideas in your hands!
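If you do want to run it locally, the workflow boils down to sampling a latent from a text prompt and decoding it into a mesh. Below is a minimal sketch adapted from the sample notebooks in the openai/shap-e repository; module paths and parameters may differ depending on the version you install, so treat it as a starting point rather than a finished recipe.

```python
# Minimal text-to-3D sketch with Shap-E (adapted from the repo's sample notebooks).
# Assumes a clone of github.com/openai/shap-e installed via pip, plus a CUDA GPU (CPU works, but slowly).
import torch

from shap_e.diffusion.sample import sample_latents
from shap_e.diffusion.gaussian_diffusion import diffusion_from_config
from shap_e.models.download import load_model, load_config
from shap_e.util.notebooks import decode_latent_mesh

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# "text300M" is the text-conditional model; "transmitter" decodes latents into geometry.
xm = load_model("transmitter", device=device)
model = load_model("text300M", device=device)
diffusion = diffusion_from_config(load_config("diffusion"))

prompt = "an ancient temple"  # any short description works

latents = sample_latents(
    batch_size=1,
    model=model,
    diffusion=diffusion,
    guidance_scale=15.0,          # higher = closer to the prompt, less variety
    model_kwargs=dict(texts=[prompt]),
    progress=True,
    clip_denoised=True,
    use_fp16=True,
    use_karras=True,
    karras_steps=64,
    sigma_min=1e-3,
    sigma_max=160,
    s_churn=0,
)

# Decode the latent into a triangle mesh and save it as .obj.
mesh = decode_latent_mesh(xm, latents[0]).tri_mesh()
with open("temple.obj", "w") as f:
    mesh.write_obj(f)
```

The resulting .obj file can then be opened in Blender or converted to STL with a tool such as MeshLab if you want to send it to a 3D printer.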

If you don’t have the required knowledge or time to learn, you can still give Shap-E a go. To do so, head over here. Keep in mind that there might be a waiting period in the community queue, but feel free to experiment directly in your browser. The results are still rough, but as we know, machines learn quickly, and it’s only the beginning. To give you an idea – here are a couple of my attempts:

An ancient temple and a green cat: first rough 3D models, generated by Shap-E. Image source: CineD

Creating animation in one click

Another AI giant, Stability AI, also recently released a text-to-animation tool. It is called Stable Animation SDK, and according to the announcement, artists and developers can now use the most advanced Stable Diffusion models to generate stunning animations. If you’re not familiar with how this whole system works, we covered the basic process here and recommend you read the part about Stable Diffusion first.

With this new tool, users can create animations in three different ways:

  • by writing a simple text prompt (just like for image generation in Stable Diffusion) and then adjusting diverse parameters;
  • by adding an image input and combining it with a text prompt;
  • by providing the neural network with a video, which will become a starting point for the animation.

You can read the very thorough official guide on how to install and use the Stable Animation SDK here. Be aware, though, that unlike its 2D image-generation predecessor, this new software is not free and comes with different pricing tiers. Is it worth paying for? That’s up to you. Some users think the animations it creates aren’t quite there yet, but they should improve down the line as the model continues to learn.
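To give you an idea of what the text-prompt route looks like in practice, here is a rough sketch using Stability AI’s Python SDK. It follows the workflow described in the official tutorial, but the host address, key handling, and the specific parameter and prompt values below are assumptions you should check against the current documentation; a paid Stability API key is required.

```python
# Rough sketch of the text-prompt workflow with the Stable Animation SDK.
# Assumes the stability-sdk package with its animation extras is installed and you have an API key.
# Parameter names follow the published tutorial and may change between SDK versions.
from stability_sdk import api
from stability_sdk.animation import AnimationArgs, Animator

STABILITY_HOST = "grpc.stability.ai:443"
STABILITY_KEY = "sk-..."  # your API key goes here

# Connect to the Stability API.
context = api.Context(STABILITY_HOST, STABILITY_KEY)

# Keyframed text prompts: frame number -> prompt. The SDK interpolates between them.
animation_prompts = {
    0: "a misty ancient temple at dawn",
    48: "the same ancient temple at night, lit by torches",
}

# Configure the animation.
args = AnimationArgs()
args.interpolate_prompts = True   # blend between the two prompts over time
args.locked_seed = True           # keep the look consistent across frames
args.max_frames = 96
args.seed = 42

animator = Animator(
    api_context=context,
    animation_prompts=animation_prompts,
    negative_prompt="blurry, low quality",
    args=args,
    out_dir="temple_animation",
)

# Render frame by frame; the finished frames are written to out_dir as numbered images.
for _ in animator.render():
    pass
```

From there, the frames can be assembled into a video with any editor or encoder you already use.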

AI launch for creators of virtual worlds

Let’s say you don’t want just one specific 3D object. Instead, you’re aiming to create a whole new world behind those pixels. If you’re a seasoned animator and have worked in the gaming industry for ages, good for you! (You probably have all the skills required for such a task). If not, take a look at Lovelace Studio, who just introduced a new technology called Nyric. Simply put, it’s an AI world-generation platform that turns text into a metaverse.

Various generated worlds. Image source: Lovelace Studio

I should add the word “apparently” to all of the above, though, as Nyric is not available to the public yet. So, for now, we have to rely on the developers’ statements. They promise the new AI will be a step forward in simplifying the creation of video games. Well, the showcase video looks impressive, and the idea of generating realistic and fantastical worlds based on nothing more than a text description even more so. If you’re eager to try it out, you can contact Lovelace Studio on Twitter or Discord. They actually answer quickly, and who knows, maybe you could be among the next lucky testers?

AI launches in the world of graphics

A short side note for people who work with graphics and photos on a daily basis: Google announced several AI editing features for its Magic Editor, which will come to Google Photos later this year. Among them are easy background replacement, removal of distracting elements, and repositioning of subjects within a picture. It sounds like AI will take over some mundane retouching tasks, giving us more time and energy to focus on creative challenges.

Feature image source: Stability AI.

What about you? Are there any other AI launches you’re really looking forward to? Do you find tools like text-to-3D generators useful? How could you use them in your work? Let us know in the comments below!
