Curated by THEOUTPOST
On Sat, 7 Sept, 12:01 AM UTC
2 Sources
[1]
New global standard aims to build security around large language models
A new global standard has been released to help organizations manage the risks of integrating large language models (LLMs) into their systems and to address the ambiguities around these models. The framework offers guidelines for the phases across the lifecycle of LLMs, spanning "development, deployment, and maintenance," according to the World Digital Technology Academy (WDTA), which released the document on Friday. The Geneva-based non-governmental organization (NGO) operates under the United Nations and was established last year to drive the development of standards in the digital realm.

"The standard emphasizes a multi-layered approach to security, encompassing network, system, platform and application, model, and data layers," WDTA said. "It leverages key concepts such as the Machine Learning Bill of Materials, zero trust architecture, and continuous monitoring and auditing. These concepts are designed to ensure the integrity, availability, confidentiality, controllability, and reliability of LLM systems throughout their supply chain."

Dubbed the AI-STR-03 standard, the new framework aims to identify and assess the challenges of integrating artificial intelligence (AI) technologies, specifically LLMs, within current IT ecosystems, WDTA said. This is essential because these AI models may be used in products or services that are operated fully or partially by third parties, but not managed by them.

Security requirements related to the system structure of LLMs -- referred to as supply chain security requirements -- encompass requirements for the network layer, system layer, platform and application layer, model layer, and data layer.
These ensure the product and its systems, components, models, data, and tools are protected against tampering or unauthorized replacement throughout the lifecycle of LLM products. WDTA said this involves implementing controls and continuous monitoring at each stage of the supply chain. The standard also addresses common vulnerabilities in middleware security to prevent unauthorized access, safeguards against the risk of poisoning the training data used by engineers, and enforces a zero-trust architecture to mitigate internal threats.

"By maintaining the integrity of every stage, from data acquisition to supplier deployment, consumers using LLMs can ensure the LLM products remain secure and trustworthy," WDTA said.

LLM supply chain security requirements also address the need for availability, confidentiality, control, reliability, and visibility. Together, these ensure that data transmitted along the supply chain is not disclosed to unauthorized individuals and establish transparency, so consumers understand how their data is managed. They also provide visibility into the supply chain: if a model is updated with new training data, for instance, the status of the AI model -- before and after the training data was added -- is properly documented and traceable.

The new framework was drafted and reviewed by a working group comprising several tech companies and institutions, including Microsoft, Google, Meta, Cloud Security Alliance Greater China Region, Nanyang Technological University in Singapore, Tencent Cloud, and Baidu. According to WDTA, it is the first international standard to address LLM supply chain security.

International cooperation on AI-related standards is increasingly crucial as AI continues to advance and impact various sectors worldwide, WDTA added.
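The traceability requirement described above can be pictured with a small sketch: recording a cryptographic hash of the model artifact and its list of training datasets before and after an update, so each state is documented and verifiable. The record format, file names, and field names below are illustrative assumptions, not anything defined by the WDTA standard.

```python
import hashlib
import json
from datetime import datetime, timezone

def file_sha256(path: str) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def record_model_state(model_path: str, datasets: list[str], note: str) -> dict:
    """Capture a traceable snapshot of a model artifact and its training data."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_sha256": file_sha256(model_path),
        "datasets": sorted(datasets),
        "note": note,
    }

# Write a tiny placeholder "model" artifact so the sketch is self-contained.
with open("model.bin", "wb") as f:
    f.write(b"weights-v1")
provenance_log = [record_model_state("model.bin", ["corpus-v1"], "before update")]

# A retraining step would rewrite the artifact; simulated here with new bytes.
with open("model.bin", "wb") as f:
    f.write(b"weights-v2")
provenance_log.append(
    record_model_state("model.bin", ["corpus-v1", "corpus-v2"], "after update")
)

# The two snapshots differ, so the update is detectable and auditable.
print(json.dumps(provenance_log, indent=2))
```

Comparing the recorded hashes is one simple way an auditor could confirm that a deployed model matches a documented state, or detect that it has been changed.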
"Achieving trustworthy AI is a global endeavor, demanding the creation of effective governance tools and processes that transcend national borders," the NGO said. "Global standardization plays a crucial role in this context, providing a key avenue for promoting alignment on best practice and interoperability of AI governance regimes." Also: Enterprises will need AI governance as large language models grow in number Microsoft's technology strategist Lars Ruddigkeit said the new framework does not aim to be perfect but provides the foundation for an international standard. "We want to establish what is the minimum that must be achieved," Ruddigkeit said. "There's a lot of ambiguity and uncertainty currently around LLMs and other emerging technologies, which makes it hard for institutions, companies, and governments to decide what would be a meaningful standard. The WDTA supply chain standard tries to bring this first road to a safe future on track."
[2]
First international standard for LLMs to be developed by US-China tech coalition
In an unexpected move toward global AI governance, China and the US have collaborated to develop the world's first international standard for large language model (LLM) security in supply chains. The development comes from joint efforts by Chinese giants Ant, Baidu, and Tencent, together with US firms Google, Meta, and Microsoft.

The 'Large Language Model Security Requirements for Supply Chain' initiative was unveiled today by the World Digital Technology Academy (WDTA) in Shanghai. The new standard aims to address the entire lifecycle of LLMs in order to prevent security risks such as data leaks, model tampering, and supplier non-compliance.

Top academic and industry institutions, such as the Cloud Security Alliance Greater China Region and Nanyang Technological University in Singapore, joined the American and Chinese companies in drafting and reviewing the guidance. Together, these bodies form the AI Safety, Trust, and Responsibility (AI STR) working group.

Peter Major, Chair of the United Nations Commission on Science and Technology for Development and Honorary Chairman of the WDTA, commented: "International cooperation on AI-related standards has become increasingly crucial as artificial intelligence continues to advance and impact various sectors globally." In a blog post, the WDTA added: "This international cooperation is essential for managing the risks associated with AI while maximizing its benefits for all societies."

As generative AI technologies continue to develop, companies have called for greater measures to enhance safety. OpenAI CEO Sam Altman previously called for "full-stack safety efforts." Despite the US' intention to inhibit Chinese technological and military advancements with export restrictions, the two nations have cooperated on this effort, underscoring the crucial nature of establishing mutually agreeable and clear guidance. Moreover, China was the world's first country to regulate generative AI; other nations and regions are continuing to play catch-up.
A new global standard is being developed to enhance security and reliability in large language models (LLMs). This initiative involves a coalition of tech companies from the US and China, marking a significant step in AI governance.
In a groundbreaking move, a coalition of tech giants from the United States and China is joining forces to develop the first international standard for large language model (LLM) supply chain security. The initiative, led by the World Digital Technology Academy (WDTA), a Geneva-based NGO operating under the United Nations, aims to address growing concerns about the security and reliability of AI systems [1].

The coalition includes major Chinese tech companies such as Ant, Baidu, and Tencent, alongside American firms like Google, Meta, and Microsoft. This collaboration marks a significant step in international cooperation on AI governance, despite ongoing geopolitical tensions between the two countries [2].

The primary goal of the standard, designated AI-STR-03, is to establish guidelines for the secure and reliable deployment of LLMs across their supply chain. It focuses on crucial aspects such as the integrity, availability, confidentiality, controllability, and reliability of LLM systems [1].

As LLMs become increasingly prevalent in applications ranging from chatbots to content generation, concerns about their potential risks have grown. The new standard aims to tackle issues such as data leaks along the supply chain, tampering with or unauthorized replacement of models, poisoning of training data, and supplier non-compliance. By setting these standards, the coalition hopes to build greater trust in AI technologies and promote their responsible development and use [2].

Unveiled at the WDTA in Shanghai, the standard could serve as a benchmark for AI developers and users worldwide, potentially influencing regulations and best practices in the AI industry [1].

The creation of this international standard represents a significant milestone in the evolution of AI governance. It demonstrates the potential for global cooperation in addressing the challenges posed by rapidly advancing AI technologies, even amid complex geopolitical relationships [2].

As AI continues to play an increasingly important role in sectors ranging from healthcare to finance, such standards could pave the way for more responsible and trustworthy AI systems. This initiative may also inspire further international collaborations in technology governance, fostering a more unified approach to managing the risks and harnessing the benefits of AI on a global scale.
MLCommons, an industry-led AI consortium, has introduced AILuminate, a benchmark for assessing the safety of large language models. This initiative aims to standardize AI safety evaluation and promote responsible AI development.
3 Sources
LatticeFlow, in collaboration with ETH Zurich and INSAIT, has developed the first comprehensive technical interpretation of the EU AI Act for evaluating Large Language Models (LLMs), revealing compliance gaps in popular AI models.
12 Sources
Major tech companies including Google, Microsoft, OpenAI, and Nvidia have joined forces to create the Coalition for Secure AI (CoSAI). This initiative aims to enhance AI safety and security through collaboration and shared research.
9 Sources
China's industry ministry forms an AI standardization committee to develop industry standards for large language models and AI risk assessment, signaling its intent to become a global AI standard-setter.
2 Sources
Major AI companies like OpenAI, Microsoft, and Meta face growing cybersecurity challenges in protecting their large language models from threats such as model pollution and data corruption.
2 Sources
The Outpost is a comprehensive collection of curated artificial intelligence software tools that cater to the needs of small business owners, bloggers, artists, musicians, entrepreneurs, marketers, writers, and researchers.
© 2025 TheOutpost.AI All rights reserved