Wonder Animation is a technology that transforms real video into 3D scenes with customisable camera setups, full body and face animation, and editable elements.
As AI advances, the boundaries of 3D modelling are expanding, offering unprecedented opportunities for creativity and innovation. New technologies are already redefining how we interact with digital environments, making them more responsive, intuitive, and lifelike.
Los Angeles-based Wonder Dynamics, a company under Autodesk that provides software for VFX and CG, announced the release of 'Wonder Animation' in October this year. The company claims that this new technology can transform real videos into 3D scenes with customisable camera setups, full body/face animation, and editable elements, all in one 3D space.
In a post on Autodesk's official blog, the company said that it is aware of the misconception that "AI is a one-click solution". It aims to bring artists closer to full animation, ensuring creative control and avoiding the black-box approach of typical generative AI tools.
This year, Google Research and DeepMind teamed up to release a paper titled 'CAT3D: Create Anything in 3D with Multi-View Diffusion Models'. CAT3D enables real-time, multi-angle viewing of 3D models and can generate 3D scenes from a single image, a set of images, or a single text prompt, all in under a minute.
It outperforms existing methods, particularly when working with limited images.
Other platforms are automating adjacent steps of the pipeline. Alpha3D and Mixamo, for instance, offer automated rigging services, simplifying the animation process for 3D characters. Another platform, 3DFY, is changing architecture by enabling rapid, high-quality 3D model generation from minimal input, streamlining design processes and enhancing visualisation capabilities.
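Rigging, in essence, attaches a joint hierarchy to a mesh so that posing the joints moves the character. As a rough illustration of the kind of structure an automated rigger produces (this is not any platform's actual API; the function and values below are hypothetical), here is a minimal forward-kinematics sketch in Python:

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Pose a simple 2D joint chain: each joint's rotation is
    accumulated down the chain, the way a rig drives a skeleton."""
    x = y = 0.0
    angle = 0.0
    positions = [(x, y)]  # root joint at the origin
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta               # child joints inherit parent rotation
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# A two-bone "arm": shoulder bent 90 degrees, elbow kept straight.
pose = forward_kinematics([1.0, 1.0], [math.pi / 2, 0.0])
```

Here the end joint lands directly above the root. An automated rigging service builds such hierarchies, plus the skin weights binding them to the mesh, from raw geometry instead of requiring an artist to place every joint by hand.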
AI is transforming animation and 3D modelling, impacting artists, studios, developers, and end-users alike. By automating repetitive tasks, AI frees artists to focus on creativity, while studios benefit from faster, cost-effective production.
Independent developers and small studios gain access to advanced tools, like those from startups such as 3DAiLY, levelling the playing field. End-users experience richer, more immersive environments in gaming, VR, and digital fashion, thanks to AI-driven motion blending and modelling.
However, adapting to new workflows underscores the need for continuous learning in this rapidly evolving field.
Martin Nebelong, a 3D artist exploring VR and AI, says on X, "I used a 3D model, generated from an image, as a 'driver' for AI animation. In the near future, we'll see more 3D tools that support these types of workflows."
Despite major advancements, AI does not seem to be replacing human animators and modellers but rather augmenting their capabilities. AI tools handle repetitive, time-consuming tasks, freeing artists to concentrate on storytelling and design.
Earlier, AIM reported that Meshcapade, a leader in AI-driven 3D modelling, announced an advanced motion-blending feature that enhances the realism of digital avatars by enabling smooth transitions between various human movements.
This innovation reflects a broader trend in the industry, where AI is revolutionising 3D modelling by transforming static models into dynamic, lifelike representations.
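At its simplest, motion blending is an interpolation between poses. Production systems like Meshcapade's are far more sophisticated (blending rotations properly, time-warping clips to match footfalls), but the core idea can be sketched as follows; this is illustrative code, not Meshcapade's API:

```python
def blend_poses(pose_a, pose_b, weight):
    """Linearly interpolate two poses given as flat lists of joint
    angles; weight=0.0 returns pose_a, weight=1.0 returns pose_b."""
    return [(1.0 - weight) * a + weight * b for a, b in zip(pose_a, pose_b)]

# Halfway between a "walk" frame and a "run" frame (angles in radians,
# values invented for illustration).
walk = [0.10, -0.25, 0.40]
run = [0.30, -0.55, 0.80]
transition_frame = blend_poses(walk, run, 0.5)
```

Sweeping `weight` from 0 to 1 over several frames yields the smooth walk-to-run transition that makes blended avatars look natural rather than snapping between clips.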
It is not just these companies; OpenAI, too, has tried its hand in this domain.
In May 2023, OpenAI introduced Shap-E, a conditional generative model capable of creating 3D assets from text prompts. Unlike traditional models that produce a single output, Shap-E generates parameters for implicit functions, allowing for the rendering of both textured meshes and neural radiance fields (NeRFs).
This advancement enables the rapid generation of complex and diverse 3D assets, significantly accelerating the modelling process, and pushes text-to-3D generation forward.
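To unpack what 'parameters for implicit functions' means: an implicit representation defines a shape as the region where a function over 3D space is negative, and that function can then be meshed or rendered at any resolution. The toy Python sketch below hard-codes a signed distance function for a sphere; Shap-E instead learns the parameters of such functions, and this example is illustrative only, not Shap-E's code:

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance to a sphere: negative inside the surface,
    positive outside, zero exactly on it."""
    return math.sqrt(x * x + y * y + z * z) - radius

def occupancy_grid(n=8, extent=1.5, radius=1.0):
    """Sample the implicit function on an n^3 grid and count the
    samples inside the surface. A mesher such as marching cubes
    would turn the same samples into a renderable triangle mesh."""
    step = 2.0 * extent / (n - 1)
    inside = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x = -extent + i * step
                y = -extent + j * step
                z = -extent + k * step
                if sphere_sdf(x, y, z, radius) < 0.0:
                    inside += 1
    return inside
```

Because the whole shape lives in one compact function rather than a fixed vertex list, the same representation can feed both mesh extraction and volumetric rendering, which is why Shap-E can emit textured meshes and NeRFs from a single output.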
Javi Lopez, the co-founder of Magnific AI, recently posted about another tool on X.
Another notable player in this domain is Common Sense Machines (CSM), co-founded by former Google DeepMind scientist Tejas Kulkarni. CSM focuses on infusing 'common sense' into AI systems, aiming to bridge the gap between artificial intelligence and human-like understanding.
Its platform facilitates the creation of game-engine-ready 3D content from minimal inputs such as images, text, and sketches, streamlining workflows for artists and developers.
As reported by AIM earlier, this innovation reduces the time and expertise required to produce high-quality 3D assets.
However, it also raises questions: Can we trust machines to make judgement calls yet? Adding 'common sense' might make AI smarter, but could it also create new risks in how AI understands and interacts with us?
These developments underscore a significant shift in 3D modelling, where AI not only automates the creation of complex models but also imbues them with realistic motion and behaviour.
As AIM reported earlier, 3DAiLY, an Indian AI startup, is revolutionising the gaming industry by creating ultra-realistic, production-ready 3D models for both AAA and indie games.
Leveraging generative AI, 3DAiLY offers high-quality assets at a fraction of traditional costs, making advanced 3D content creation more accessible. Their models are fully rigged and compatible with various gaming engines, including Unity, Unreal, and CryEngine.
By combining artistic intuition with AI technology, such companies are breaking down barriers in 3D modelling, enabling developers to produce immersive video gaming experiences more efficiently.
The integration of AI technologies like motion blending, generative modelling, and common sense reasoning is paving the way for more immersive and interactive digital experiences across various industries, including gaming, virtual reality, and digital fashion.