4 Sources
[1]
Meta is developing a new image and video model for a 2026 release, report says | TechCrunch
It's all hands on deck at Meta, as the company develops new AI models under its superintelligence lab led by Scale AI co-founder Alexandr Wang. The company is now working on an image and video model codenamed "Mango" along with a new text-based model internally known as "Avocado," The Wall Street Journal reported. The tech giant plans to release the new models in the first half of 2026, the publication said, citing an internal Q&A at Meta on Thursday, where Wang and chief product officer Chris Cox unveiled the new roadmap. Wang said Meta aims to make the text-based model better at coding while also exploring new world models that understand visual information and can reason, plan, and act without needing to be trained on every possibility. Meta has more recently fallen behind its rivals, like OpenAI, Anthropic, and Google, in the AI race. The company's AI division saw significant restructuring this year, which included leadership changes and the poaching of researchers from other top companies. However, several of the researchers who joined Meta Superintelligence Labs (MSL) have already left the company. Last month, the company's chief AI scientist, Yann LeCun, also announced that he's leaving to create his own startup. Meta doesn't have a winning AI product yet. Instead, the Meta AI assistant's numbers are buoyed by the company's existing social networks spanning billions of users, since the company places the assistant in the search bar of its apps. This means the first projects and models coming out of MSL will have a lot riding on them.
[2]
Meta Platforms developing Mango AI for images and videos
Meta Platforms is developing an artificial intelligence model code-named Mango for generating images and videos to accelerate its push into multimodal AI capabilities, according to sources who spoke with The Wall Street Journal. Alexandr Wang, Meta's chief AI officer, and Chris Cox, the company's chief product officer, discussed the forthcoming model during an internal question-and-answer session held on Thursday. Sources who attended the discussion provided details on the project's progress. The Mango model is expected to launch in the first half of 2026. This timeline aligns with Meta's ongoing efforts to advance its AI technologies. Following reports of the new AI initiative, Meta shares rose 2.30 percent in trading. The development underscores the company's focus on generative AI tools. The Mango project occurs amid fierce competition with rivals such as OpenAI, Google, and Anthropic. Meta continues to invest heavily in this area to maintain its position in the generative AI landscape. Mango will be developed alongside Meta's next text-based large language model, code-named Avocado. Avocado is also slated for release in early 2026.
[3]
Meta vs Google: Mango AI to Challenge Gemini Nano Banana
Meta's 'Mango AI' Set to Challenge Google's Gemini Nano Banana With 2026 Launch Plans. Meta is preparing to intensify the artificial intelligence race with its upcoming multimodal AI model, Mango. Expected to launch in 2026, Mango is designed to generate advanced images and videos in response to user requests. With this move, Meta aims to position itself strategically as a competitor to Google's Gemini Nano Banana, which, since its 2025 release, has established itself in the creative AI ecosystem. Industry experts suggest Mango could challenge Google's dominance in visual AI.
[4]
Meta plans new image and video AI model codenamed Mango, targets 2026 release: Report
The company is also preparing a new text-based large language model called Avocado. Meta is reportedly working on a new artificial intelligence model focused on images and videos, code-named Mango. Both models are expected to be released in the first half of 2026. According to WSJ, Meta's chief AI officer, Alexandr Wang, spoke about the new AI models during an internal company question-and-answer session with chief product officer Chris Cox. Wang discussed the Avocado text model, saying one of its main goals is to become much better at coding tasks. In addition, Meta is starting early work on so-called world models. These systems aim to learn about the real world by taking in visual information, helping AI better understand its surroundings. Over the summer, Meta reorganised its AI efforts and hired Wang to lead a new group called Meta Superintelligence Labs. CEO Mark Zuckerberg played a hands-on role in building the team, recruiting more than 20 researchers from OpenAI. The new division now includes over 50 researchers, engineers, and other AI experts, according to the report. Image and video generation has become a key battleground for AI companies. In September, Meta released an AI video generator called Vibes, developed in collaboration with the startup Midjourney. Less than a week later, OpenAI launched its own video generator, Sora. Google has also seen success in this space. Earlier this year, its Nano Banana image tool helped drive rapid growth for Gemini, increasing monthly users from 450 million in July to more than 650 million by late October.
After Google released a third version of Gemini in November, OpenAI CEO Sam Altman announced a "code red" as the company worked to regain its edge on important performance benchmarks. OpenAI soon released an updated version of ChatGPT Images.
Meta is building Mango, a new image and video AI model, alongside Avocado, a text-based large language model, both slated for early 2026 release. Chief AI officer Alexandr Wang revealed the roadmap during an internal Q&A, emphasizing improved coding capabilities and world models. The move intensifies competition with OpenAI, Google, and Anthropic in the generative AI race.
Meta is developing new artificial intelligence models under its superintelligence lab, with plans to release both an image and video model codenamed Mango and a text-based large language model called Avocado in the first half of 2026 [1][2]. The tech giant's chief AI officer, Alexandr Wang, and chief product officer Chris Cox unveiled the roadmap during an internal question-and-answer session held on Thursday, according to sources who spoke with The Wall Street Journal [2].

The announcement comes as Meta intensifies its efforts to catch up with rivals in the AI race, particularly OpenAI, Google, and Anthropic [1]. Following reports of the new AI initiative, Meta shares rose 2.30 percent in trading, reflecting investor confidence in the company's generative AI strategy [2].

The new image and video model represents Meta's push into multimodal AI capabilities, designed to generate advanced images and videos in response to user requests [2][3]. With the 2026 release, Meta aims to position itself strategically as a competitor to Google's Gemini Nano Banana, which established its position in the creative AI ecosystem after its 2025 release [3].
Source: Analytics Insight
Image and video generation has become a key battleground for AI companies. Meta previously released an AI video generator called Vibes in September, developed in collaboration with startup Midjourney, while OpenAI launched its own video generator, Sora, less than a week later [4]. Google has also seen success in this space, with its Nano Banana image tool helping drive rapid growth for Gemini, increasing monthly users from 450 million in July to more than 650 million by late October [4].

Alongside Mango, Meta is developing Avocado, a text-based large language model also slated for early 2026 [2][4]. Alexandr Wang stated that one of Avocado's main goals is to become much better at coding tasks [4]. Additionally, Wang said Meta aims to explore new world models that understand visual information and can reason, plan, and act without needing to be trained on every possibility [1]. These world models aim to learn about the real world by taking in visual information, helping AI better understand its surroundings [4]. The focus on coding capabilities and visual reasoning suggests Meta is targeting multiple fronts in the competitive landscape of generative AI.
Over the summer, Meta reorganized its AI efforts and hired Alexandr Wang, Scale AI co-founder, to lead a new group called Meta Superintelligence Labs [1][4]. CEO Mark Zuckerberg played a hands-on role in building the team, recruiting more than 20 researchers from OpenAI. The new division now includes over 50 researchers, engineers, and other AI experts [4].
Source: Digit
However, the company's AI division has seen significant restructuring this year, including leadership changes and the poaching of researchers from other top companies. Several of the researchers who joined Meta Superintelligence Labs have already left the company [1]. Last month, the company's chief AI scientist, Yann LeCun, also announced that he's leaving to create his own startup. Meta doesn't have a winning AI product yet. Instead, the Meta AI assistant's numbers are buoyed by the company's existing social networks spanning billions of users, since the company places the assistant in the search bar of its apps. This means the first projects and models coming out of Meta Superintelligence Labs will have a lot riding on them, making the success of Mango and Avocado critical for Meta's competitive position in the visual AI sector and the broader multimodal AI landscape.
Summarized by Navi