Curated by THEOUTPOST
On Sat, 7 Sept, 12:02 AM UTC
2 Sources
[1]
AI Models From Google, Meta, Others May Not Be Truly 'Open Source'
Open-source AI models from Google, Meta, and others may actually be quite closed, according to a recently updated definition of the term. The lengthy new definition comes from the Open Source Initiative (OSI), which has considered itself the steward of the open source definition since its founding in 1998. The OSI has been working on the updated definition for two years, and Mozilla endorses it as "critical not just for redefining what 'open source' means in the context of AI [but for] shaping the future of the technology and its impact on society."

Meta's Llama 3 would not be considered "open" under the new definition, says Nik Marda, Mozilla's technical lead of AI governance and former chief of staff for the White House Office of Science and Technology Policy's Technology Division. Google's Gemma models also fall short because they place limits on how people can use them, which the new definition does not permit, he says. "The lack of a precise definition in the past has made it easier for some companies to act like their AI was open source even when it wasn't," Marda tells PCMag. "Many, if not most, of the models from the large commercial actors will not meet this definition."

A loose definition of open source could undermine consumer products and services built on those systems, giving companies license to change how a system works and restrict access when it suits their bottom line. That could lead to "disrupted services, subpar performance, and more expensive features in the apps and tools that everyone uses on their phones, in the workplace, and across society," Marda says. Something similar played out in July, when security researchers discovered vulnerabilities in Apple devices caused by flaws in open-source code. Meta does not acknowledge OSI's definition as the new standard, and Google declined to comment.
"This is very new technology, and there is no singular, global definition for 'open source' AI," a Meta spokesperson tells PCMag. "Meta - like OSI - is committed to open-source AI. We are committed to keep working with the industry on these terms."

The definition of open-source AI has been a matter of technical debate since well before OSI released the new definition. "Models purported as 'open-source' frequently employ bespoke licenses with ambiguous terms," The Linux Foundation said in an April 2024 post. "This 'open-washing' trend threatens to undermine the very premise of openness - the free sharing of knowledge to enable inspection, replication, and collective advancement." The Linux Foundation proposes a tiered approach to openness rather than a binary "open" or "closed" designation. AI writer Sriram Parthasarathy also places Llama 3 on a spectrum of openness. "It's not as free as some open-source software but not as restricted as other AI models," he says. "In the end, Llama 3.1 is fairly open, but with some conditions."

Meta CEO Mark Zuckerberg has put open source at the center of the company's strategy, calling it "good for Meta" and "good for the world." He defines open-source models as those "whose weights are released publicly with a permissive license," citing Llama as an example in an opinion piece published in The Economist last month. According to Marda, Zuckerberg presents "a very narrow definition for open source AI -- not one that actually provides the access needed for others to truly test and build fully upon it."
[2]
New definition of open source could be trouble for Big AI | Digital Trends
The Open Source Initiative (OSI), self-proclaimed steward of the open source definition, the most widely used standard for open-source software, announced an update on Thursday to what constitutes an "open source AI." The new wording could exclude models from industry heavyweights like Meta and Google.

"Open Source has demonstrated that massive benefits accrue to everyone after removing the barriers to learning, using, sharing, and improving software systems," the OSI wrote in a recent blog post. "For AI, society needs the same essential freedoms of Open Source to enable AI developers, deployers, and end users to enjoy those same benefits."

Per the OSI, an Open Source AI is an AI system made available under terms and in a way that grant the freedoms to:

- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use with or without modifications, for any purpose.

These freedoms apply both to a fully functional system and to discrete elements of a system. A precondition to exercising these freedoms is to have access to the preferred form to make modifications to the system.

Under such a definition, neither Meta's Llama 3.1 nor Google's Gemma models would count as open-source AI, Nik Marda, Mozilla's technical lead of AI governance, told PCMag. "The lack of a precise definition in the past has made it easier for some companies to act like their AI was open source even when it wasn't. Many, if not most, of the models from the large commercial actors will not meet this definition." The older, looser definition gave companies enough leeway to potentially undermine their own consumer AI products, changing a model's functionality and disabling access at the company's whim, Marda argued.
Such actions could potentially lead to "disrupted services, subpar performance, and more expensive features in the apps and tools that everyone uses." Neither Meta nor Google has yet acknowledged the new definition as an industry standard.
Tech giants like Google and Meta face scrutiny over their 'open-source' AI models. The Open Source Initiative questions whether these models truly meet open-source criteria, sparking a debate in the tech community.
In recent weeks, a controversy has erupted in the tech world regarding the true nature of 'open-source' AI models released by major companies like Google and Meta. The Open Source Initiative (OSI), a non-profit organization that promotes open-source software, has raised concerns about whether these AI models genuinely meet the criteria for open-source designation [1].
The OSI's updated definition would exclude several high-profile AI models, including Google's Gemma and Meta's Llama 3, suggesting that these may not adhere to the true spirit of open-source software [2]. This scrutiny comes as tech giants increasingly label their AI models as 'open-source' while imposing various restrictions on their use and distribution.
At the heart of this controversy lies the definition of 'open-source' itself. Traditionally, open-source software allows users the freedom to run, copy, distribute, study, change, and improve the software [1]. However, many of these AI models come with limitations that may contradict these principles.
Some of the restrictions observed in these 'open-source' AI models include:

- Limits on how people can use the models, as with Google's Gemma [1].
- Bespoke licenses with ambiguous terms in place of standard open-source licenses [2].
- Publicly released weights without the full access needed to study, test, and build upon the system [1].

These limitations have led the OSI to question whether such models can truly be considered open-source [2].
The tech industry's response to these concerns has been mixed. While some companies defend their approach, arguing that certain restrictions are necessary for responsible AI development, others acknowledge the need for clearer definitions in the AI open-source landscape [1].
This debate raises important questions about the future of AI development and the role of open-source principles in advancing the field. As AI continues to evolve and impact various aspects of society, the resolution of this controversy could have far-reaching implications for innovation, collaboration, and ethical AI development [2].
The outcome of this debate could significantly influence how tech companies approach AI development and sharing in the future. It may lead to more stringent definitions of 'open-source' in the context of AI, potentially reshaping the landscape of AI research and development [1][2].
As the discussion unfolds, it remains to be seen how tech giants will respond to these criticisms and whether a new consensus on 'open-source' AI will emerge. The tech community eagerly awaits further developments in this ongoing controversy.
© 2025 TheOutpost.AI All rights reserved