4 Sources
[1]
As Meta Flounders, It Reportedly Plans to Open Source Its New AI Models
To say Meta's attempts to become a leader in AI have thus far fallen short would be like calling Mount Everest a short hike. But the company is pot-committed to the project, with more than $600 billion earmarked for AI, so it might as well keep going. According to Axios, the company is finally on the precipice of making its latest models public, and they'll be available via open-source licensing in the future.

Per the report, the new models will be the first released by Meta under the leadership of Alexandr Wang, the founder of training data giant Scale AI, which was acquired by Zuckerberg's company to try to juice its underperforming AI models. While the new releases will reportedly keep some parts proprietary for alleged safety purposes, the company apparently plans to open-source the models, likely offering licensing agreements to firms that want to use them instead of going full black box, like many of its competitors.

The theory is probably sound for Meta. AI coding giant Cursor recently revealed that it was using the open-source model Kimi 2.5, released by Moonshot AI, as the basis for its Composer 2 model. Given how costly it is to train a model from scratch, it seems likely that more operations will take this approach in the future. Meta would be the biggest player in the frontier-model market to offer an open-source option, which seems like a much simpler business model than the subscription approach its competitors are leaning into.

The biggest hurdle still standing in front of Meta, though, is the possibility that its model still sucks. Meta has made its LLaMa models open source-ish (the company calls them open source, but its licensing process doesn't align with any standard definition of those rights) and has pushed its AI products at every turn, but next to no one is actually using them.
The company tried to make a splash with the release of LLaMa 4 last year, but it wildly underperformed expectations and failed to hit expected benchmarks. Meta's attempts to get back into the mix and compete with the top dogs in the space have been a pretty spectacular failure thus far. Despite throwing $100 million pay packages at big names in the AI space and undergoing seemingly endless restructurings, the company still can't get it together. It was supposed to release a new model last month, but opted to delay due to concerns that it was still underperforming. There were rumors that Zuckerberg and Wang were at odds because of issues behind the scenes.

The fact that Wang is front and center in Axios's report about Meta's upcoming models suggests it may be time for him to sink or swim. If the release falls short, he'll likely be the fall guy. One thing is for certain: if anything goes wrong, Zuck definitely won't be taking the blame.
[2]
Scoop: Meta will still open-source AI models -- just not all of them
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.

* Before openly releasing versions of the new models, Meta wants to keep some pieces proprietary and to ensure they don't add new levels of safety risk, according to sources.

Between the lines: The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and for ensuring there is a U.S.-made option that is open for developers.

* Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely and as broadly as possible around the world.

The big picture: Meta has said the first family of models is designed to help it catch up to rivals after its Llama 4 family fell significantly behind, with the aim that future models can lead the industry.

Yes, but: The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.

* Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.

And don't expect a full return to Meta's earlier openness. Wang has indicated that some of its largest new models will remain proprietary -- a shift toward a more hybrid strategy, according to sources.

* Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram -- free services with global scale that competitors can't easily match.
Our thought bubble: Meta's approach increasingly looks like a hedge: open enough to win developer mindshare and shape the ecosystem, but closed where it believes the biggest models confer a competitive edge.

* That mirrors a broader industry shift, where even companies that champion openness are pulling back on their most powerful systems.

* Alibaba recently kept its most powerful new Qwen models proprietary, reversing its own open-source playbook.

Context: Wang joined Meta last year as part of a $15 billion deal with Scale AI, where he was CEO.
[3]
Report: Meta developing open-source versions of upcoming AI models - SiliconANGLE
Meta Platforms Inc. plans to release open-source versions of its next-generation artificial intelligence models, Axios reported today.

The company debuted its most capable neural network last April. Llama 4 Maverick, as the algorithm is called, is an open-source large language model with 400 billion parameters. In December, sources told Bloomberg that Meta was planning to shift to a closed-source distribution approach with future LLMs. The company is reportedly developing two proprietary frontier models: an LLM codenamed Avocado and a multimedia file generator known internally as Mango. Both algorithms are expected to launch this year.

The open-source models detailed in today's Axios report are presumably derived from Avocado and Mango. It's unclear whether the open-source versions will launch at the same time as the proprietary editions. Axios reported that the open-source algorithms will launch "eventually." They're part of an effort by Meta to distribute its models "widely and as broadly as possible around the world."

The open-source versions reportedly won't include all the features that are available in the closed-source versions. It's unclear which capabilities Meta will leave out.

Llama 4 Maverick, the company's best open-source LLM, is based on a mixture-of-experts architecture. It's not a monolithic algorithm but rather a collection of 128 neural networks that are each optimized for a different set of tasks. Meta's upcoming open-source models may lack some of the neural networks that power the proprietary editions. Another possibility is that the company will scale down the open-source versions' parameter counts or skip certain training steps. Frontier LLMs go through several rounds of training, including so-called post-training that occurs once their core capabilities are already in place.
It's not uncommon for open-source model developers to release scaled-down versions of an algorithm that didn't go through post-training.

AI safety is reportedly one of the reasons Meta's open-source models won't include all the features of the proprietary editions. That hints that Avocado will be adept at generating cybersecurity-related code. Claude 4.6 Opus, Anthropic PBC's most capable LLM, has so far discovered hundreds of critical vulnerabilities in open-source projects.

Anthropic and OpenAI Group PBC are both preparing to release new flagship LLMs. According to Axios, Meta doesn't expect its upcoming models to best the competition "across the board." However, the Facebook parent's algorithms reportedly have multiple "areas of strength" with appeal to consumers.

One way Meta could seek to win over consumers is by making its models more hardware-efficient than the competition. Many frontier LLMs can't run on personal computers because of processor constraints. It's also possible that Meta will optimize the algorithms for use cases such as personal health and homework assistance that aren't prioritized by enterprise-focused models.
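The mixture-of-experts design described above (a router sending each token to a small subset of expert networks rather than through one monolithic model) can be sketched in a few lines. This is an illustrative toy, not Meta's implementation: the hidden size, the four dense experts, and the top-2 routing are all assumptions chosen for demonstration (Maverick reportedly uses 128 experts).

```python
# Toy mixture-of-experts layer: a learned router picks the top-k experts
# per token, and the layer output is the gate-weighted sum of those
# experts' outputs. Sizes are illustrative, not Meta's configuration.
import numpy as np

rng = np.random.default_rng(0)

D, NUM_EXPERTS, TOP_K = 8, 4, 2  # hidden size, expert count, experts per token

# Each "expert" here is just an independent linear map for simplicity.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(NUM_EXPERTS)]
router_w = rng.standard_normal((D, NUM_EXPERTS)) / np.sqrt(D)

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and mix their outputs."""
    logits = x @ router_w                          # (tokens, experts)
    top = np.argsort(logits, axis=-1)[:, -TOP_K:]  # indices of chosen experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        chosen = logits[t, top[t]]
        gates = np.exp(chosen - chosen.max())
        gates /= gates.sum()                       # softmax over chosen experts
        for gate, e in zip(gates, top[t]):
            out[t] += gate * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((3, D))
y = moe_forward(tokens)
print(y.shape)  # (3, 8)
```

The point of the design is the sparsity: only 2 of the 4 experts run for any given token, so compute per token stays far below what the total parameter count suggests. Which is also why a smaller open-source release could plausibly ship with fewer experts, as the article speculates.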
[4]
Meta to release open source versions of its upcoming AI models - The Economic Times
Meta plans to release new AI models under Alexandr Wang, with some versions open source but certain parts kept private for safety and competitive reasons. Unlike OpenAI and Anthropic, Meta aims to provide widely accessible, US-made models for developers and consumers, reflecting a broader industry trend toward cautious openness.

Meta is reportedly preparing to launch its first set of AI models under Alexandr Wang, with plans to offer versions of those models via an open source license, according to a report by news website Axios. While Meta plans to open source some versions, it is unlikely to do so straight away or in full. The company wants to keep certain components private for now, partly to manage safety risks, the report said.

This marks a subtle change. Meta has long stood out among major US tech firms for allowing developers to modify its frontier models. However, there has been growing talk that it may scale back that openness, especially as the AI race intensifies.

Wang appears to be shaping this direction. He sees Meta as a way to provide AI models that are widely accessible to developers and everyday users. Unlike competitors such as OpenAI and Anthropic, which mainly target businesses and governments with almost no open models, Meta aims to provide a US-built option that is open for developers.

Meta argues that its strength lies in its consumer reach. By embedding AI tools into platforms such as WhatsApp, Facebook, and Instagram, it can deliver its technology to billions of users worldwide, often for free. That scale is difficult for competitors to match.

The new models are also meant to help Meta catch up. Its previous Llama 4 family fell behind rivals, and newer systems from competitors are expected soon. Axios said that Meta does not necessarily expect to lead in every area, but it believes it can still stand out in ways that matter to consumers.
Meta wants to stay open enough to attract developers, but closed enough to protect its biggest models so as to maintain a competitive edge. It reflects a broader industry trend, where even advocates of openness are becoming more cautious with their top-tier AI. Billionaire Elon Musk, one of OpenAI's founders, has been publicly clashing with its CEO, Sam Altman, over the shift. Musk argues that the organisation has moved away from its original mission. In his view, the name OpenAI itself reflects a commitment to openness, and he believes its models should still be freely accessible. Meanwhile, Chinese tech giant Alibaba recently chose to keep its most advanced Qwen models closed, despite previously supporting open-source releases.
Meta is preparing to release open-source versions of its next-generation AI models under Alexandr Wang's leadership, marking a strategic shift toward a hybrid approach. While some components will remain proprietary for safety and competitive reasons, the move aims to democratize AI access and provide a widely accessible, US-made alternative to enterprise-focused rivals like OpenAI and Anthropic.
Meta is preparing to release open-source versions of its upcoming AI models under the leadership of Alexandr Wang, founder of training data giant Scale AI, which Meta acquired in a $15 billion deal last year [2]. This marks the first major release under Wang's direction as Meta attempts to recover from the disappointing performance of its LLaMa 4 family, which fell significantly behind rivals and failed to hit expected benchmarks [1]. The company has earmarked more than $600 billion for AI development, signaling its commitment despite previous setbacks [1].
Source: Axios
Unlike Meta's previous approach, the new plan represents a hybrid strategy that balances openness with proprietary elements. While the company plans to offer open-source versions of its AI models, certain components will remain closed for safety and competitive reasons [2]. This shift reflects a broader industry trend where even companies championing openness are pulling back on their most powerful systems, as evidenced by Alibaba recently keeping its most advanced Qwen models proprietary [2][4].

Wang's vision focuses on democratizing AI access by providing a widely accessible, US-made option for developers and consumers [2]. This consumer-focused approach contrasts sharply with rivals like OpenAI and Anthropic, which increasingly target enterprise clients and governments with closed models [2][4]. Meta argues it reaches users more broadly than competitors by embedding AI into WhatsApp, Facebook, and Instagram: free services with global scale that rivals cannot easily match [2].
Source: Gizmodo
The company is developing two proprietary frontier models codenamed Avocado, an LLM, and Mango, a multimedia file generator, both expected to launch this year [3]. The open-source versions detailed in reports are presumably derived from these systems, though they won't include all features available in the closed-source editions [3]. Meta wants to keep some pieces proprietary initially to ensure they don't add new levels of safety risk [2].
Meta's upcoming AI models face stiff competition as both OpenAI and Anthropic prepare to release new flagship models representing significant advances [2]. Sources indicate Meta knows its new models may not be competitive across the board with coming releases from those labs, but believes it will have areas of strength that appeal to consumers [2][3]. The company was supposed to release a new model last month but delayed due to concerns about underperformance [1].

Meta's AI strategy increasingly looks like a hedge: open enough to win developer mindshare and shape the ecosystem, but closed where it believes the biggest models confer a competitive edge against rivals [2]. The approach could prove advantageous: AI coding firm Cursor recently revealed it used the open-source model Kimi 2.5 as the basis for its Composer 2 model, suggesting more firms may adopt this cost-effective approach rather than training models from scratch [1]. However, Meta still faces the fundamental question of whether its models can match the performance of competitors, particularly after Llama 4 Maverick, its 400-billion-parameter open-source model released last April, wildly underperformed expectations [1][3]. The success of this hybrid strategy will likely determine whether Meta can finally establish itself as a serious contender in the frontier AI race, with Wang positioned as either the architect of Meta's AI resurgence or the fall guy if the models fail to deliver [1].
Source: ET
Summarized by Navi
04 Dec 2025 • Technology
