6 Sources
[1]
China's Alibaba shifts towards revenue over open-source AI
Chinese tech giant Alibaba has brought in a business veteran to lead its AI division as part of a strategic shift towards models it can monetise. Zhou Jingren, the former chief technology officer of Alibaba Cloud, has taken control after internal disagreements over strategy led to the departure of senior figures from its flagship Qwen team, according to two people with knowledge of the matter and a memo sent to staff.

Qwen is one of the most popular open-source options, so Alibaba's focus on models it can monetise may affect AI development worldwide. Meta has made a similar shift of focus away from its Llama open-source models.

"The market dynamics are evolving," said Brian Wong, a former Alibaba executive and author of The Tao of Alibaba. "If you're only building models and relying on APIs or open-source ecosystems, you're going to be in a difficult position."

The moves reflect a growing consensus across the industry that, with value shifting to AI applications such as coding and agents, simply building powerful models is not enough to succeed.

Alibaba currently generates the bulk of its AI-related cloud revenue from leasing out graphics processing units (GPUs) to customers. But it is now seeking to capture a greater share of spending by offering its proprietary models and integrating AI tools across its ecommerce ecosystem. Chief executive Eddie Wu said in last month's earnings call that its nascent "model-as-a-service" would become a key driver in the cloud division. MaaS, where companies pay based on usage of AI models, currently accounts for only a small share of Alibaba's cloud revenue and remains low margin due to intense competition.

Wu last month announced the formation of the Alibaba Token Hub, a business unit combining its model training team and enterprise and consumer applications under a single structure designed to accelerate commercialisation.
The internal shake-up -- which includes a new leadership committee on AI strategy headed by Wu -- comes as Alibaba faces rising competition from ByteDance, the creator of TikTok. ByteDance has shaped its cloud sales strategy around the consumption of "tokens", the units of data processed by AI models. The rapid rise of "agentic" AI systems, which are capable of executing multi-step tasks and planning independently with limited human supervision, requires far more computing resources than traditional chatbot queries and is driving a surge in token consumption.

Duncan Clark, founder of consultancy BDA, said Alibaba's pivot amounted to "an attempt to reposition itself as the 'Google of China' -- anchoring its business around cloud infrastructure, proprietary models and in-house chips". "Monetisation from models is small and low margin for now," Clark said. "But rising use of agentic AI is providing supportive momentum."

The shift comes amid a broader change in investor sentiment, as enthusiasm for model performance gives way to scrutiny over returns. Many investors now believe advances in large language models are becoming incremental, while the real opportunity lies in embedding them into products that drive sustained usage and revenue.

Among those who have left Alibaba are Lin Junyang, Qwen's former technical lead, and Hui Binyuan, a researcher focused on coding. Lin was a leading proponent of Qwen's open-source approach, offering free, downloadable models that run efficiently on devices at low cost. The strategy won strong support from the global developer community and positioned Alibaba as a leader in China's open-source AI push. However, it also raised concerns internally about the lack of a clear path to commercialisation, according to several people familiar with the matter. Those concerns intensified as investor focus shifted from benchmark performance to monetisation.
Lin had come under increasing pressure from senior management about the large resources being spent training open-source models, particularly after rival Chinese labs -- including MiniMax, Zhipu and Moonshot -- released new models around the lunar new year that outperformed Qwen in coding, a fast-growing area of AI demand.

"Junyang's team was too focused on benchmark rankings and open source, which doesn't provide value for the cloud business," said a person familiar with Alibaba's strategy. They added that Zhou would prioritise aligning model development with the company's cloud and revenue goals.

Alibaba has already released a flurry of closed-source models this month, keeping its leading models proprietary for customers accessing them through its cloud business. The person added that Alibaba planned to continue releasing advanced open-source models in some areas. Earlier this week a popular new open-source AI video generation model called Happy Horse was released anonymously; it was developed by Alibaba, according to the person.

One Alipay AI engineer who has worked with Zhou described the executive as "highly technical" and well positioned to redirect its training efforts. "People have been carried away by Qwen's reputation and academic success. But Jingren is capable and in control of the team, with support from Alibaba leadership," said the person familiar with Alibaba's strategy.
[2]
As Meta Flounders, It Reportedly Plans to Open Source Its New AI Models
To say Meta's attempts to become a leader in AI have thus far fallen short would be like calling Mount Everest a short hike. But the company is pot-committed to the project, with more than $600 billion earmarked for AI, so it might as well keep going. According to Axios, the company is finally on the precipice of making its latest models public, and they'll be available via open-source licensing in the future.

Per the report, the new models will be the first released by Meta under the leadership of Alexandr Wang, the founder of training data giant Scale AI, which was acquired by Zuckerberg's company to try to juice its underperforming AI models. While the new releases will reportedly maintain some proprietary parts for alleged safety purposes, the company apparently plans to open-source the models, likely offering licensing agreements to firms that want to use the model instead of going full black box, like many of its competitors.

The theory is probably sound for Meta. AI coding giant Cursor recently revealed that it was using the open-source model Kimi 2.5, released by Moonshot AI, as the basis for its Composer 2 model. Given how costly it is to train a model from scratch, it seems likely that more operations will take this approach in the future. Meta would be the biggest player in the frontier model market to offer an open-source option, which seems like a much simpler business model than the subscription approach that its competitors are leaning into.

The biggest hurdle still standing in front of Meta, though, is the possibility that its model still sucks. Meta has made its LLaMa models open source-ish (the company calls it open source, but its licensing process doesn't align with any definition of those rights) and has pushed its AI products at every turn, but next to no one is actually using them.
The company tried to make a splash with the release of LLaMa 4 last year, but it wildly underperformed expectations and failed to hit expected benchmarks. Meta's attempts to get back into the mix and compete with the top dogs in the space have been a pretty spectacular failure thus far. Despite throwing $100 million pay packages at big names in the AI space and undergoing seemingly endless restructurings, the company still can't get it together. It was supposed to release a new model last month, but opted to delay that due to concerns that it was still underperforming.

There were rumors that Zuckerberg and Wang were at odds because of issues behind the scenes. The fact that Wang is front and center in Axios's report about Meta's upcoming models suggests it may be time for him to sink or swim. If the models fall short, he'll likely be the fall guy. One thing is for certain: if anything goes wrong, Zuck definitely won't be taking the blame.
[3]
Scoop: Meta will still open-source AI models -- just not all of them
Why it matters: Meta has been the largest U.S. player to let others modify its frontier models, and there has been growing speculation the company might retreat from that strategy altogether.

* Before openly releasing versions of the new models, Meta wants to keep some pieces proprietary and to ensure they don't add new levels of safety risk, according to sources.

Between the lines: The move fits with Wang's view that Meta can be a force for democratizing access to the latest AI technology and ensuring that there is a U.S.-made option that is open for developers.

* Wang sees Anthropic and OpenAI as increasingly focused on delivering their models to governments and the enterprise. By contrast, Meta's effort is focused on consumers, per sources. Meta wants its models distributed as widely and as broadly as possible around the world.

The big picture: Meta has said the first family of models is designed to help it catch up to rivals after its last Llama 4 family fell significantly behind, with the aim that future models can lead the industry.

Yes, but: The leaders aren't standing still. Both OpenAI and Anthropic are hinting that their next models, also expected to drop soon, represent significant advances.

* Meta knows its new models may not be competitive across the board with the coming ones from those labs, but believes it will have areas of strength that appeal to consumers, the sources said.

And don't expect a full return to Meta's earlier openness. Wang has indicated that some of its largest new models will remain proprietary -- a shift toward a more hybrid strategy, according to sources.

* Meta argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram -- free services with global scale that competitors can't easily match.
Our thought bubble: Meta's approach increasingly looks like a hedge: open enough to win developer mindshare and shape the ecosystem, but closed where it believes the biggest models confer a competitive edge.

* That mirrors a broader industry shift, where even companies that champion openness are pulling back on their most powerful systems.

* Alibaba recently kept its most powerful new Qwen models proprietary, reversing its own open-source playbook.

Context: Wang joined Meta last year as part of a $15 billion deal with Scale AI, where he was CEO.
[4]
Alibaba and Meta retreat from open-source AI
Alibaba and Meta are both recalibrating their artificial intelligence strategies, with recent product decisions pointing to a broader industry shift toward closed or hybrid models -- even among companies long associated with open-source development.

Alibaba's proprietary push

Alibaba, historically known for its open-source approach to AI, has launched three new proprietary models: Wan2.7-Image for image generation, Qwen3.5-Omni for multimodal processing, and Qwen3.6-Plus focused on coding agents. According to Bloomberg, IT Home, and Wall Street CN, the models are designed for different use cases but share a common characteristic -- they are not fully open to developers. Qwen3.5-Omni, the flagship release, can process text, images, audio, and video, supports a 256K context window, and has achieved state-of-the-art performance across 215 third-party benchmarks, the outlets reported. Wan2.7-Image emphasizes visual generation and editing, while Qwen3.6-Plus targets enterprise coding applications with a context window of up to 1 million tokens.

The move marks a notable departure from Alibaba's earlier strategy. Its Qwen family previously gained traction on platforms such as Hugging Face, where it has accumulated more than 113,000 derivative versions. By contrast, the latest models restrict access to source code and limit user modification, signaling a shift toward greater control in commercial applications. Alibaba said it has not abandoned open source entirely and plans to release additional versions, including smaller open models. However, the company has also raised cloud and storage prices by up to 34%, underscoring its efforts to monetize AI investments. Analysts cited by Bloomberg and Chinese media interpret the pivot as a response to intensifying competition in China's e-commerce market, which has pressured profitability.

Meta's hybrid gamble

Meta is also moving toward a more selective openness strategy.
According to Axios, the company is preparing a new generation of AI models that will include both open-source and proprietary versions. The models, reportedly led by Chief AI Officer Alexandr Wang, include a text model codenamed "Avocado" and a multimodal system called "Mango," expected to launch in the first half of 2026. Open-source variants will be derived from these systems, though with certain limitations. SiliconAngle reported that Meta may restrict features in open versions for safety reasons, potentially reducing parameter counts, omitting parts of post-training, or removing specific neural network components. Sources cited by Axios added that some advanced capabilities -- such as code generation -- may remain exclusive to proprietary models.

The shift reflects a broader reassessment within the industry. While Meta has positioned itself as a champion of open AI -- leveraging platforms like WhatsApp, Facebook, and Instagram to reach global users -- it is increasingly balancing openness with competitive and safety considerations. Meanwhile, the company faces performance challenges. Sources told Axios that Meta does not expect its upcoming models to fully surpass next-generation systems from rivals such as OpenAI and Anthropic, though it aims to remain competitive in consumer-focused applications.

Openness gives way to control

Recent moves by Alibaba, Meta, and others suggest a gradual tightening of access to the most advanced AI models. Even companies that built their ecosystems around open-source development are now limiting the availability of flagship systems, particularly in high-value commercial and enterprise use cases. This shift highlights a growing tension between openness and monetization. As development costs rise and competition intensifies, companies appear increasingly inclined to retain control over their most powerful technologies -- reshaping the balance between innovation, accessibility, and business sustainability in the AI sector.
Article translated by Jingyue Hsiao and edited by Jerry Chen
[5]
Report: Meta developing open-source versions of upcoming AI models - SiliconANGLE
Meta Platforms Inc. plans to release open-source versions of its next-generation artificial intelligence models, Axios reported today. The company debuted its most capable neural network last April. Llama 4 Maverick, as the algorithm is called, is an open-source large language model with 400 billion parameters. In December, sources told Bloomberg that Meta was planning to shift to a closed-source distribution approach with future LLMs.

The company is reportedly developing two proprietary frontier models. There's an LLM codenamed Avocado and a multimedia file generator known internally as Mango. Both algorithms are expected to launch this year. The open-source models detailed in today's Axios report are presumably derived from Avocado and Mango. It's unclear whether the open-source versions will launch at the same time as the proprietary editions. Axios reported that the former algorithms will launch "eventually." They're part of an effort by Meta to distribute its models "widely and as broadly as possible around the world."

The open-source versions reportedly won't include all the features that are available in the closed-source versions. It's unclear which capabilities Meta will leave out. Llama 4 Maverick, the company's best open-source LLM, is based on a mixture-of-experts architecture. It's not a monolithic algorithm but rather 128 different neural networks that are each optimized for a different set of tasks. Meta's upcoming open-source models may lack some of the neural networks that power the proprietary editions. Another possibility is that the company will scale down the open-source versions' parameter counts or skip certain training steps. Frontier LLMs go through several rounds of training, including so-called post-training that occurs once their core capabilities are already in place. It's not uncommon for open-source model developers to release scaled-down algorithm versions that didn't go through post-training.

AI safety is reportedly one of the reasons Meta's open-source models won't include all the features of the proprietary editions. That hints that Avocado will be adept at generating cybersecurity-related code. Claude 4.6 Opus, Anthropic PBC's most capable LLM, has so far discovered hundreds of critical vulnerabilities in open-source projects.

Anthropic and OpenAI Group PBC are both preparing to release new flagship LLMs. According to Axios, Meta doesn't expect its upcoming models to best the competition "across the board." However, the Facebook parent's algorithms reportedly have multiple "areas of strength" with appeal to consumers. One way Meta could seek to win over consumers is by making its models more hardware-efficient than the competition. Many frontier LLMs can't run on personal computers because of processor constraints. It's also possible that Meta will optimize the algorithms for use cases such as personal health and homework assistance that aren't prioritized by enterprise-focused models.
[6]
Meta to release open source versions of its upcoming AI models - The Economic Times
Meta plans to release new AI models under Alexandr Wang, with some versions open source but certain parts kept private for safety and competitive reasons. Unlike OpenAI and Anthropic, Meta aims to provide widely accessible, US-made models for developers and consumers, reflecting a broader industry trend toward cautious openness.

Meta is reportedly preparing to launch its first set of AI models under Alexandr Wang, with plans to offer versions of those models via an open source license, according to a report by news website Axios. While Meta plans to open source some versions, it is unlikely to do so straight away or in full. The company wants to keep certain components private for now, partly to manage safety risks, the report said.

This marks a subtle change. Meta has long stood out among major US tech firms for allowing developers to modify its frontier models. However, there has been growing talk that it may scale back that openness, especially as the AI race intensifies. Wang appears to be shaping this direction. He sees Meta as a way to provide AI models that are widely accessible to developers and everyday users. Unlike competitors like OpenAI and Anthropic, which mainly target businesses and governments with almost no open models, Meta aims to provide a US-built option that is open for developers.

Meta argues that its strength lies in its consumer reach. By embedding AI tools into platforms such as WhatsApp, Facebook, and Instagram, it can deliver its technology to billions of users worldwide, often for free. That scale is difficult for competitors to match. The new models are also meant to help Meta catch up. Its previous Llama 4 family fell behind rivals, and newer systems from competitors are expected soon. Axios said that Meta does not necessarily expect to lead in every area, but it believes it can still stand out in ways that matter to consumers.
Meta wants to stay open enough to attract developers, but closed enough to protect its biggest models so as to maintain a competitive edge. It reflects a broader industry trend, where even advocates of openness are becoming more cautious with their top-tier AI. Billionaire Elon Musk, one of OpenAI's founders, has been publicly clashing with its CEO, Sam Altman, over the shift. Musk argues that the organisation has moved away from its original mission. In his view, the name OpenAI itself reflects a commitment to openness, and he believes its models should still be freely accessible. Meanwhile, Chinese tech giant Alibaba recently chose to keep its most advanced Qwen models closed, despite previously supporting open-source releases.
Two of the world's biggest proponents of open-source AI are pulling back. Alibaba has appointed a new leader for its AI division and is prioritizing proprietary AI models after internal disputes over strategy. Meta is adopting a hybrid approach, keeping its most powerful models closed while releasing limited open-source versions. The shift reflects growing industry consensus that building powerful models isn't enough—companies need clear paths to revenue.
Alibaba and Meta, two companies that built their AI reputations on open-source development, are executing strategic pivots toward proprietary models and AI monetization [1][4]. The shift comes as industry leaders recognize that benchmark performance alone doesn't translate to sustainable business models. Alibaba has installed Zhou Jingren, former chief technology officer of Alibaba Cloud, to lead its AI division after internal disagreements led to departures from its flagship Qwen team [1]. Meanwhile, Meta is preparing to release new frontier models with restricted features, marking a departure from its previous commitment to fully open systems [3].
Source: Axios
The departure of Lin Junyang, Qwen's former technical lead and a leading proponent of the open-source approach, underscores the tension between developer community engagement and cloud revenue generation [1]. Lin faced mounting pressure from senior management over the resources spent training open-source models, particularly after rival Chinese labs including MiniMax, Zhipu, and Moonshot released models that outperformed Qwen in coding applications. "Junyang's team was too focused on benchmark rankings and open source, which doesn't provide value for the cloud business," a person familiar with Alibaba's strategy told the Financial Times [1]. Alibaba has launched three new proprietary AI models: Wan2.7-Image for image generation, Qwen3.5-Omni for multimodal processing, and Qwen3.6-Plus focused on coding agents [4]. Qwen3.5-Omni achieved state-of-the-art performance across 215 third-party benchmarks and supports a 256K context window, while Qwen3.6-Plus targets enterprise clients with a context window of up to 1 million tokens [4].
Source: DIGITIMES
Meta is developing two proprietary frontier models, an LLM codenamed "Avocado" and a multimodal system called "Mango," expected to launch in the first half of 2026 [4][5]. Under the leadership of Alexandr Wang, founder of the training data giant Scale AI that Meta acquired, the company plans to release open-source versions derived from these systems, though with significant limitations [2][3]. The open-source variants may have reduced parameter counts, omit parts of post-training, or exclude specific neural network components for safety reasons [4]. Wang sees this approach as democratizing AI access while ensuring a U.S.-made option remains available to developers globally, in contrast with OpenAI and Anthropic's focus on government and enterprise markets [3]. Meta acknowledges its upcoming models may not surpass next-generation systems from these competitors "across the board," but believes it will maintain areas of strength appealing to consumers [3].
Source: Gizmodo
The moves by both companies reflect broader industry recognition that value is shifting to AI applications rather than raw model capabilities. Alibaba currently generates most of its AI-related cloud revenue from leasing graphics processing units to customers, but CEO Eddie Wu announced that model-as-a-service would become a key driver for the cloud division [1]. The company formed the Alibaba Token Hub, a business unit combining model training teams with enterprise and consumer applications to accelerate commercialization. This strategy mirrors ByteDance's approach of shaping cloud sales around token consumption, the units of data processed by AI models [1]. The rapid rise of agentic AI systems, capable of executing multi-step tasks with limited human supervision, requires far more computing resources than traditional chatbot queries and is driving a surge in token consumption.

Duncan Clark, founder of consultancy BDA, characterizes Alibaba's pivot as "an attempt to reposition itself as the 'Google of China'," anchoring its business around cloud infrastructure, proprietary models and in-house chips [1]. While monetization from models remains small and low margin for now, rising use of agentic AI provides supportive momentum. Meta's approach increasingly looks like a hedge: open enough to win developer mindshare and shape the ecosystem, but closed where it believes the biggest models confer a competitive edge [3]. The company argues it still reaches users more broadly than rivals by embedding AI into WhatsApp, Facebook and Instagram, free services with global scale that competitors can't easily match.

For developers and enterprises building on these platforms, the shift means evaluating whether limited open-source versions provide sufficient capabilities or whether proprietary access through cloud services becomes necessary. The tension between openness and monetization will likely define competitive positioning as development costs continue rising and companies seek sustainable business models in the AI sector.