Curated by THEOUTPOST
On Fri, 13 Dec, 8:09 AM UTC
7 Sources
[1]
How Meta's Motivo AI Model Can Make Digital Avatars More Lifelike
Motivo was announced by Meta on December 12
Meta is looking to make metaverse avatars more lifelike
Motivo could deliver more expressive metaverse avatars

Meta is researching and developing new AI models that could have potential uses in Web3 applications. The Facebook parent has released an AI model called Meta Motivo that can control the bodily movements of digital avatars, and it is expected to improve the overall metaverse experience. The newly unveiled model is expected to offer optimised body motion and interaction for avatars in metaverse ecosystems. The company calls Motivo the 'first-of-its-kind behavioural foundation model'. The AI model enables virtual human avatars to complete a variety of complex whole-body tasks while making virtual physics more seamless in the metaverse.

Through unsupervised reinforcement learning, Meta has enabled Motivo to perform an array of tasks in complex environments. A novel algorithm was used to train the model on an unlabelled dataset of motions, helping it pick up human-like behaviours while retaining zero-shot inference capabilities, the company said in a blog post.

Announcing the launch of Motivo on X, Meta shared a short video demo showing what integrating this model with virtual avatars would look like. The clip showed a humanoid avatar performing dance moves and kicks as whole-body tasks. Meta said it is using unsupervised reinforcement learning to elicit this 'human-like behaviour' in virtual avatars, as part of its attempts to make them look more realistic. The company says that Motivo can solve a range of whole-body control tasks, including motion tracking, goal pose reaching, and reward optimisation, without any additional training.

Reality Labs is Meta's internal unit working on its metaverse-related initiatives, and it has recorded consecutive losses since 2022. Despite the pattern, Zuckerberg has hedged his bets on the metaverse, testing newer technologies to fine-tune the overall experience. Earlier this year, Meta showcased a demo of Hyperscape, which turns a smartphone camera into a gateway to photorealistic metaverse environments: the tool enables smartphones to scan physical spaces and transform them into hyperrealistic metaverse backgrounds. In June, Meta split its Reality Labs team into two divisions, one tasked with the metaverse-focussed Quest headsets and the other responsible for hardware wearables that Meta may launch in the future. The aim of this step was to consolidate the time the Reality Labs team puts into developing newer AI and Web3 technologies.
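To make the idea of a 'behavioural foundation model' concrete, here is a minimal, self-contained Python sketch of the kind of interface such a model exposes. Everything in it, class and method names included, is a hypothetical mock rather than Meta's actual Motivo API; the point it illustrates is that a goal pose, a reference motion clip, and a reward signal are all embedded into one shared latent space, and a single pretrained policy conditioned on that latent handles each task zero-shot, with no per-task training.

```python
# Toy sketch of a behavioural foundation model's interface. All names here
# are hypothetical illustrations, NOT Meta's Motivo API. Random projections
# stand in for pretrained encoders and the policy.
import numpy as np

OBS_DIM, ACT_DIM, LATENT_DIM = 64, 16, 32
rng = np.random.default_rng(0)

class ToyBehaviorModel:
    def __init__(self):
        self.goal_enc = rng.normal(size=(OBS_DIM, LATENT_DIM))
        self.motion_enc = rng.normal(size=(OBS_DIM, LATENT_DIM))
        self.policy_w = rng.normal(size=(OBS_DIM + LATENT_DIM, ACT_DIM))

    def embed_goal(self, goal_pose):
        # Goal pose reaching: embed the target pose into the task latent.
        return np.tanh(goal_pose @ self.goal_enc)

    def embed_motion(self, motion_clip):
        # Motion tracking: pool an unlabelled reference clip into one latent.
        return np.tanh(motion_clip.mean(axis=0) @ self.motion_enc)

    def embed_reward(self, reward_fn, n_samples=256):
        # Reward optimisation: score candidate latents, keep the best one.
        zs = rng.normal(size=(n_samples, LATENT_DIM))
        scores = [reward_fn(z) for z in zs]
        return zs[int(np.argmax(scores))]

    def act(self, obs, z):
        # One policy serves every task, conditioned on the task latent z.
        return np.tanh(np.concatenate([obs, z]) @ self.policy_w)

model = ToyBehaviorModel()
obs = rng.normal(size=OBS_DIM)
for name, z in [
    ("goal pose reaching", model.embed_goal(rng.normal(size=OBS_DIM))),
    ("motion tracking", model.embed_motion(rng.normal(size=(30, OBS_DIM)))),
    ("reward optimisation", model.embed_reward(lambda z: -np.abs(z).sum())),
]:
    print(name, "-> action:", model.act(obs, z)[:3])
```

The design point the mock captures is why no additional training is needed per task: each prompt type is reduced to the same latent vector, so supporting a new task means adding an encoder, not retraining the policy.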
[2]
Meta releases AI model to enhance Metaverse experience
(Reuters) - Meta said on Thursday it was releasing an artificial intelligence model called Meta Motivo, which could control the movements of a human-like digital agent, with the potential to enhance Metaverse experience. The company has been plowing tens of billions of dollars into its investments in AI, augmented reality and other Metaverse technologies, driving up its capital expense forecast for 2024 to a record high of between $37 billion and $40 billion. Meta has also been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services. "We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences," the company said in a statement. Meta Motivo addresses body control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner, the company said. Meta said it was also introducing a different training model for language modeling called the Large Concept Model (LCM), which aims to "decouple reasoning from language representation". "The LCM is a significant departure from a typical LLM. Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space," the company said. Other AI tools released by Meta include the Video Seal, which embeds a hidden watermark into videos, making it invisible to the naked eye but traceable. (Reporting by Jeffery Dastin and Seher Dareen in Bengaluru; Editing by Sherry Jacob-Phillips)
[3]
Meta releases AI model to enhance Metaverse experience
Dec 12 (Reuters) - Meta (META.O) said on Thursday it was releasing an artificial intelligence model called Meta Motivo, which could control the movements of a human-like digital agent, with the potential to enhance Metaverse experience. The company has been plowing tens of billions of dollars into its investments in AI, augmented reality and other Metaverse technologies, driving up its capital expense forecast for 2024 to a record high of between $37 billion and $40 billion. Meta has also been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services. "We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences," the company said in a statement. Meta Motivo addresses body control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner, the company said. Meta said it was also introducing a different training model for language modeling called the Large Concept Model (LCM), which aims to "decouple reasoning from language representation". "The LCM is a significant departure from a typical LLM. Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space," the company said. Other AI tools released by Meta include the Video Seal, which embeds a hidden watermark into videos, making it invisible to the naked eye but traceable. Reporting by Jeffery Dastin and Seher Dareen in Bengaluru; Editing by Sherry Jacob-Phillips
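The "next concept" idea quoted above can be sketched in a few lines of Python. The encoder, predictor, and decoder below are toy stand-ins (a hashing trick and a random projection), not Meta's components, and the decoder simply picks the nearest candidate sentence instead of generating text; the sketch only shows the shape of the loop: embed whole sentences, predict the next sentence embedding rather than the next token, then map that embedding back to text.

```python
# Toy sketch of concept-level (sentence-level) prediction. The encoder,
# predictor, and decoder are random stand-ins, NOT Meta's LCM components;
# the real model works in a multimodal, multilingual embedding space.
import numpy as np

EMB = 128
rng = np.random.default_rng(1)
W_pred = rng.normal(size=(EMB, EMB)) / np.sqrt(EMB)

def encode_sentence(sentence: str) -> np.ndarray:
    # Stand-in for a sentence encoder: hash words into a unit vector.
    v = np.zeros(EMB)
    for w in sentence.lower().split():
        v[hash(w) % EMB] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def predict_next_concept(context: list) -> np.ndarray:
    # An LLM predicts the next token; an LCM predicts the next *sentence*
    # embedding from the sequence of previous sentence embeddings.
    h = np.mean(context, axis=0)      # toy context pooling
    nxt = np.tanh(h @ W_pred)
    return nxt / np.linalg.norm(nxt)

def decode_concept(concept, candidates):
    # Stand-in decoder: return the candidate sentence whose embedding is
    # closest to the predicted concept (a real LCM generates text).
    sims = [concept @ encode_sentence(c) for c in candidates]
    return candidates[int(np.argmax(sims))]

context = [encode_sentence(s) for s in
           ["Meta released a new model.", "It controls humanoid avatars."]]
concept = predict_next_concept(context)
print(decode_concept(concept, ["The model needs no fine-tuning.",
                               "The weather is sunny today."]))
```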
[4]
Meta Introduces Groundbreaking AI Models to Revolutionize Metaverse Realism and Content Security
AI Innovations from Meta Transform Virtual Character Interactions

Meta recently revealed its new AI model, Meta Motivo, which programs and coordinates the actions of human-like AI characters. This technology aims to make characters seem more realistic in metaverse and gaming worlds and to improve the animation industry. Unlike previous methods, which require labeled data and task-specific training, Meta Motivo relies on a large database of unlabeled motions. Because this approach covers many different types of human movement, the model can produce realistic and natural-looking animations for characters. It also handles whole-body tasks such as motion and object control simultaneously, without being trained for each motion. Meta claimed that Meta Motivo outperforms state-of-the-art methods in the unsupervised reinforcement learning domain and delivers comparable performance in task-specific evaluations. Such developments might also help enhance non-player characters (NPCs) and animation techniques across virtual platforms.
[5]
Meta releases AI model to enhance Metaverse experience
Meta said on Thursday it was releasing an artificial intelligence model called Meta Motivo, which could control the movements of a human-like digital agent with the potential to enhance the Metaverse experience. The company has been plowing tens of billions of dollars into its investments in AI, augmented reality and other Metaverse technologies, driving up its capital expense forecast for 2024 to a record high of between $37 billion and $40 billion. Meta has also been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services. "We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences," the company said in a statement. Meta Motivo addresses body control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner, the company said. Meta said it was also introducing a different training model for language modeling called the Large Concept Model (LCM), which aims to "decouple reasoning from language representation." "The LCM is a significant departure from a typical LLM. Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence in a multimodal and multilingual embedding space," the company said. Other AI tools released by Meta include the Video Seal, which embeds a hidden watermark into videos, making it invisible to the naked eye but traceable.
[6]
Meta releases AI models for motion rendering and video watermarking - SiliconANGLE
Meta Platforms Inc. has released two artificial intelligence models that can be used to generate motion animations and video watermarks. The algorithms, Motivo and Video Seal, became available on Thursday. The Facebook parent also introduced two internally developed neural network architectures. One of them, a technology called LCM, is touted as a new approach to building large language models.

Motivo, the first of the new models, can be used to animate three-dimensional avatars of the kind often featured in virtual reality applications. The model renders avatar movements based on user-provided descriptions. It can also change the avatar's pose: a user could, for example, instruct Motivo to have a standing avatar sit or vice versa. The model automatically adapts animations to configuration changes. For instance, it could revise the way an avatar moves if a user adds wind to the virtual environment in which the avatar is placed.

Usually, rendering-focused AI models have to be optimized for each specific type of motion they're used to generate. That fine-tuning requires a significant amount of resources. Meta says that Motivo doesn't require such fine-tuning, yet provides output quality similar to that of algorithms optimized to render specific motions. The main innovation in the model is the way it ingests data. Motivo encodes information about motions and the current state of an avatar into a single latent space, a mathematical structure that AI models use to store their knowledge. The latent space also holds rewards, data points that are used to guide an AI's training process.

Meta is one of the main players in the virtual reality headset market. The company believes Motivo could help improve the quality of VR avatars and other immersive content. "We believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences," the company's researchers wrote in a blog post.

Meta released Motivo alongside Video Seal, a machine learning tool for watermarking AI-generated videos. Watermarks created by the software are invisible to the human eye. According to Meta, they can't be removed using common editing techniques such as blurring and cropping or by compressing a clip. The company previously released a similar watermarking tool for audio files. Earlier, Alphabet Inc.'s Google DeepMind lab introduced a technology called SynthID for identifying AI-generated images. Like Video Seal, SynthID generates invisible watermarks designed to be difficult to remove.

Meta released its two new AI models alongside a pair of research papers describing two internally developed architectures for creating neural networks. The first technology, Flow Matching, is designed to power AI models that generate multimedia content such as videos. It's positioned as an alternative to the diffusion architecture that powers most video generation algorithms. Meta has already implemented Flow Matching in several of its consumer-facing generative AI tools. "Flow Matching is a state-of-the-art generative paradigm for many modalities including generation of images, videos, audio, music, 3D structures like proteins, and more," the company's researchers detailed.

Meta's other new AI architecture is the LCM, short for Large Concept Model, which is designed to power large language models. LLMs usually generate sentences one word fragment, or token, at a time. Models powered by Meta's LCM architecture take a different approach. "Rather than predicting the next token, the LCM is trained to predict the next concept or high-level idea, represented by a full sentence," Meta detailed. "Overall, the LCM outperforms or matches recent LLMs in the pure generative task of summarization."
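Flow Matching itself is a published, well-defined training objective, so a compact sketch is possible without guessing at Meta's internals: learn a velocity field v(x, t) that carries noise to data along the straight path x_t = (1 - t)·x0 + t·x1, whose target velocity is simply x1 - x0. The PyTorch toy below (a two-dimensional ring dataset, a small MLP, and Euler integration at sampling time) is a generic conditional flow-matching example, not Meta's implementation.

```python
# Generic conditional flow matching on a 2-D toy dataset; illustrative
# only, not Meta's implementation.
import torch
import torch.nn as nn

dim = 2
model = nn.Sequential(nn.Linear(dim + 1, 64), nn.SiLU(), nn.Linear(64, dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def sample_data(n):
    # Toy "data" distribution: points on a ring of radius 2.
    theta = torch.rand(n) * 2 * torch.pi
    return torch.stack([2 * torch.cos(theta), 2 * torch.sin(theta)], dim=1)

for _ in range(2000):
    x1 = sample_data(256)                     # data samples
    x0 = torch.randn_like(x1)                 # noise samples
    t = torch.rand(x1.shape[0], 1)            # random time in [0, 1]
    xt = (1 - t) * x0 + t * x1                # point on the straight path
    target_v = x1 - x0                        # the path's constant velocity
    pred_v = model(torch.cat([xt, t], dim=1))
    loss = ((pred_v - target_v) ** 2).mean()  # conditional FM loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# Generation: integrate dx/dt = v(x, t) from t=0 (noise) to t=1 (data).
x = torch.randn(5, dim)
with torch.no_grad():
    for i in range(100):
        t = torch.full((x.shape[0], 1), i / 100)
        x = x + model(torch.cat([x, t], dim=1)) / 100
print(x)  # samples should land near the radius-2 ring
```

Unlike diffusion training, there is no noise schedule or score function here; the regression target along the straight path is known in closed form, which is part of the paradigm's appeal for multimodal generation.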
[7]
Meta Says Foundation Model Gives Virtual Embodied Agents Human-Like Movements | PYMNTS.com
Meta has unveiled a foundation model for controlling the behavior of virtual embodied humanoid agents, saying it will enhance the Metaverse. The new Meta Motivo enables these agents to learn human-like movements and perform complex tasks, the company said in a Thursday (Dec. 12) blog post. Meta Motivo also adjusts to gravity, wind and other changes in the environment without being trained for them, according to the post. "In the future, we believe this research could pave the way for fully embodied agents in the Metaverse, leading to more lifelike NPCs, democratization of character animation, and new types of immersive experiences," the post said.

Meta said in April that its capital expenditures on artificial intelligence and the Metaverse-development unit Reality Labs would range between $35 billion and $40 billion by the end of the year. Those figures were $5 billion more than the company initially forecast for developing new AI products for consumers, developers, businesses and hardware manufacturers, PYMNTS reported at the time. "We're building a number of different AI services, from our AI assistant to augmented reality apps and glasses, to APIs [application programming interfaces] that help creators engage their communities and that fans can interact with, to business APIs that we think every business eventually on our platform will use," Meta CEO Mark Zuckerberg said April 24 during the company's quarterly earnings call.

Meta also said Thursday that it launched a comprehensive framework for neural video watermarking called Meta Video Seal, which is designed to verify video origins at a time when AI-generated content is on the rise. This tool can add a watermark and an optional hidden message into videos in a way that is imperceptible to the naked eye, resists common video editing efforts and can later be uncovered to determine the origin of the video, according to the blog post. "While AI tools can help bring the world closer together, it's important that we implement safeguards to mitigate the risks of imitation, manipulation and other forms of misuse that can undermine their benefits," the post said. "Post-hoc watermarking is a crucial step towards better traceability for content and AI models."
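The articles do not describe Video Seal's mechanics, so the sketch below illustrates only the classic idea behind an imperceptible watermark carrying a recoverable hidden message: add a low-amplitude pseudorandom carrier, keyed by a secret seed, whose sign encodes message bits, then recover the bits by correlating against the regenerated carrier. This toy spread-spectrum scheme operates on a single frame and is not robust to editing the way a learned neural watermark like Video Seal is claimed to be.

```python
# Toy spread-spectrum watermark for one frame. Illustrative only: this is
# NOT Meta's Video Seal algorithm, which uses a learned neural watermark
# designed to survive blurring, cropping, and compression.
import numpy as np

def embed(frame, bits, seed=42, strength=4.0):
    # Add a secret pseudorandom carrier per bit; the sign encodes the bit.
    rng = np.random.default_rng(seed)
    marked = frame.astype(np.float64).reshape(-1)
    for bit, chunk in zip(bits, np.array_split(marked, len(bits))):
        chunk += strength * rng.standard_normal(chunk.size) * (1 if bit else -1)
    return np.clip(marked.reshape(frame.shape), 0, 255).astype(np.uint8)

def extract(frame, n_bits, seed=42):
    # Correlate each region with the regenerated carrier; the sign of the
    # correlation (after removing the region's mean) recovers the bit.
    rng = np.random.default_rng(seed)
    flat = frame.astype(np.float64).reshape(-1)
    bits = []
    for chunk in np.array_split(flat, n_bits):
        carrier = rng.standard_normal(chunk.size)
        bits.append(int((chunk - chunk.mean()) @ carrier > 0))
    return bits

# Smooth toy "frame" (a gradient) standing in for video content.
x = np.arange(64)
frame = (64 + (x[:, None] + x[None, :]) // 2).astype(np.uint8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
watermarked = embed(frame, msg)
print("recovered:", extract(watermarked, len(msg)))  # matches msg
print("max pixel change:",
      int(np.abs(watermarked.astype(int) - frame.astype(int)).max()))
```

Because the carrier is zero-mean noise keyed by a secret seed, the pixel changes are tiny and statistically invisible, yet the correlation test recovers the message reliably; surviving re-encoding and edits is exactly what the learned approach adds.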
Meta has introduced Motivo, an AI model designed to improve the realism of digital avatars in the metaverse. This development aims to enhance user experience and advance Meta's ambitious metaverse project.
Meta, the parent company of Facebook, has unveiled a groundbreaking AI model called Motivo, aimed at revolutionizing the way digital avatars move and interact in virtual environments. Announced on December 12, Motivo represents a significant step forward in Meta's ambitious metaverse project [1].
Motivo is described as the 'first-of-its-kind behavioral foundation model' that employs unsupervised reinforcement learning to enable virtual human avatars to complete a variety of complex whole-body tasks [1]. The model utilizes an unlabeled dataset of motions and a novel algorithm to learn human-like behaviors while maintaining zero-shot inference capabilities [1].
Unlike traditional methods that require labeled data and task-specific training, Motivo can generate realistic and natural-looking animations for characters across various types of human movement [4].
Meta believes that Motivo could pave the way for fully embodied agents in the metaverse, leading to more lifelike non-player characters (NPCs), democratization of character animation, and new types of immersive experiences [2]. The model addresses body control problems commonly seen in digital avatars, enabling them to perform movements in a more realistic, human-like manner [3].
The release of Motivo is part of Meta's broader strategy of investing heavily in AI, augmented reality, and other metaverse technologies. The company has forecasted its capital expenses for 2024 to reach a record high of between $37 billion and $40 billion [3]. Despite consecutive losses recorded by its Reality Labs division since 2022, CEO Mark Zuckerberg continues to bet on the metaverse, testing newer technologies to refine the overall experience [1].
Alongside Motivo, Meta has introduced other AI tools to enhance its metaverse ecosystem:
Large Concept Model (LCM): A new approach to language modeling that aims to "decouple reasoning from language representation" [3].
Video Seal: An AI tool that embeds hidden watermarks into videos, making them invisible to the naked eye but traceable [5].
Meta has been releasing many of its AI models for free use by developers, believing that an open approach could benefit its business by fostering the creation of better tools for its services [3]. This strategy could potentially accelerate the development of more advanced and realistic metaverse experiences.
As Meta continues to push the boundaries of AI and virtual reality technology, the introduction of Motivo marks a significant milestone in the company's quest to create a more immersive and lifelike metaverse experience.
Meta has released a range of new AI models and tools, including SAM 2.1, Spirit LM, and Movie Gen, focusing on open-source development and collaboration with filmmakers to drive innovation in various fields.
2 Sources
Meta has introduced a voice mode for its AI assistant, allowing users to engage in conversations and share photos. This update, along with other AI advancements, marks a significant step in Meta's AI strategy across its platforms.
10 Sources
Meta introduces groundbreaking AI technology for creating realistic video avatars of influencers, enabling auto-dubbing and lip-syncing across languages. The innovation raises both excitement and ethical concerns in the digital content creation landscape.
3 Sources
Meta has introduced a groundbreaking AI model called the "Self-Taught Evaluator" that can autonomously assess and improve other AI systems, potentially reducing human involvement in AI development.
7 Sources
Meta has released Llama 3, its latest and most advanced AI language model, boasting significant improvements in language processing and mathematical capabilities. This update positions Meta as a strong contender in the AI race, with potential impacts on various industries and startups.
22 Sources