Curated by THEOUTPOST
On Fri, 19 Jul, 12:01 AM UTC
9 Sources
[1]
The Call Is Coming From Inside the House: Tech Giants Form AI Safety Coalition
The Coalition for Secure AI (CoSAI) aims to 'drive responsible development' of AI. The biggest names in AI have joined forces for a new coalition that's intended to enhance trust and security in the use and deployment of artificial intelligence. The Coalition for Secure AI (CoSAI) was announced at the Aspen Security Forum, and it will be hosted by OASIS Open, a nonprofit that promotes the development of open standards. Sponsors include Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal, and Wiz, who aim "to give all practitioners and developers the guidance and tools they need to create Secure-by-Design AI systems." Key areas of focus will be bolstering AI security systems, addressing challenges in AI development, and developing best practices for AI security. It's a little vague, but The Verge notes that concerns about leaking confidential information and automated discrimination are likely areas the coalition could try to address. "We've been working to pull this coalition together over the past year, in order to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon," said Heather Adkins, Google's VP of Security Engineering. "CoSAI will help organizations, big and small, securely and responsibly integrate AI - helping them leverage its benefits while mitigating risks." In February, Microsoft and Nvidia were among those who agreed to join the US AI Safety Institute Consortium, a group formed as part of President Biden's October executive order calling for AI regulation. This all comes as AI companies are under scrutiny for how they train their AI models. Lawmakers around the world are also examining how best to regulate the industry, which companies like those in CoSAI would like to avoid, or at least shape in some way.
[2]
The biggest names in AI have teamed up to promote AI security
Google, OpenAI, Microsoft, Amazon, Nvidia, Intel, and other big names in AI are coming together to form the Coalition for Secure AI (CoSAI), according to an announcement on Thursday. The initiative aims to address a "fragmented landscape of AI security" by providing access to open-source methodologies, frameworks, and tools. We don't know how much of an impact CoSAI will have on the AI industry, but concerns about leaking confidential information and automated discrimination come to mind as examples of the security, privacy, and safety questions facing generative AI technology. Other companies joining CoSAI include IBM, PayPal, Cisco, and Anthropic. CoSAI will exist within the Organization for the Advancement of Structured Information Standards (OASIS), a nonprofit group that promotes the development of open standards. CoSAI will work on three goals to start: developing best practices for AI security, addressing challenges in AI, and securing AI applications. "We've been using AI for many years and see the ongoing potential for defenders, but also recognize its opportunities for adversaries," Heather Adkins, Google's vice president of security, says in a statement. "CoSAI will help organizations, big and small, securely and responsibly integrate AI -- helping them leverage its benefits while mitigating risks."
[3]
Google, OpenAI, Microsoft, and more form a new Coalition for Secure AI
Summary: The Coalition for Secure AI aims to address AI security risks with a "defender's framework" and rulebook for safe development. The coalition, spearheaded by Google and including major players like Microsoft and OpenAI, focuses on software supply chain security and mitigation strategies. While CoSAI's efforts are commendable, overlap with existing organizations and potential bias concerns may impact its success. The past two years of technological advancement have all been dwarfed by AI going mainstream. OpenAI's ChatGPT was clearly the catalyst of the AI arms race, with giants like Google, Microsoft, Apple, Meta, Samsung, and many more forced to play catch-up. The rapid development of AI is a cause for concern. Last year, several public figures and AI researchers penned an open letter to AI labs globally to pause the development of large-scale AI systems, citing "profound risks to society and humanity." That didn't really go anywhere. Recognizing the critical need for robust measures surrounding AI and its development, Google introduced the Secure AI Framework, or SAIF, last year. Building on it, the tech giant is now introducing a new coalition with all the big shots in tow. At the Aspen Security Forum today, Thursday, July 18, Google made its Coalition for Secure AI (CoSAI) official, stating that it has been working steadily to pull the team together over the past year in an effort to "advance comprehensive security measures for addressing the unique risks that come with AI," both short term (those that arise in real time) and long term (those looming). CoSAI's stacked lineup of founding members includes Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal, and Wiz.
Google detailed the coalition and its plans in a blog post, and there's also a dedicated website for CoSAI. CoSAI's initial plans: In its blog post, Google said a whole lot of nothing beyond technical jargon, like CoSAI focusing on "Software Supply Chain Security for AI systems," which will essentially ensure that AI code is built using safe and reliable software, track how AI software is built, and identify problems early on. The coalition also aims to create safeguards or a "defender's framework" with tools to identify and fight AI security threats as they come up. "This workstream will develop a defender's framework to help defenders identify investments and mitigation techniques to address the security impact of AI use. The framework will scale mitigation strategies with the emergence of offensive cybersecurity advancements in AI models," reads the blog post. Lastly, CoSAI intends to create a rulebook that defines how to develop AI and ensure its safe use, with checklists and scorecards to "guide practitioners in readiness assessments." CoSAI's efforts seem to be focused mainly on the security aspect of AI. While its initiative is commendable, and the need of the hour, it may be redundant: previously formed organizations like the Frontier Model Forum and Partnership on AI already have roles that overlap with CoSAI's plans. Further, CoSAI has all the big players, which might be a double-edged sword. A coalition of all the big shots does somewhat help the cause, considering that there wouldn't be a lack of resources needed to act upon its plans. However, it might also raise questions about bias, with CoSAI favoring its members and protecting its own AI interests. The coalition can minimize questions related to bias by being transparent about its decisions, though we'll have to wait and see how things play out.
[4]
Google, Microsoft, Nvidia, OpenAI Launch CoSAI For AI Safety
Apart from Google, founding members include Amazon, Anthropic, Cisco, Cohere, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal, and Wiz. Tech behemoths Google, Microsoft, Nvidia, and OpenAI, among others, have launched the Coalition for Secure AI (CoSAI) to address AI safety concerns. Announced at the Aspen Security Forum, CoSAI aims to establish robust security frameworks and standards for AI development and deployment. This initiative comes at a critical time as the AI landscape continues to evolve rapidly. CoSAI, spearheaded by Google, brings together leading tech companies and organizations to tackle AI security challenges. According to the announcement, founding members include Amazon, Anthropic, Cisco, Cohere, IBM, Intel, Microsoft, Nvidia, OpenAI, PayPal, and Wiz. Notably, this coalition aims to create secure-by-design AI systems, leveraging open-source methodologies and standardized frameworks to foster trust and security in AI. The announcement highlighted the importance of a security framework for artificial intelligence, building on Google's previously introduced Secure AI Framework (SAIF). CoSAI aims to advance comprehensive security measures for AI, addressing both current and emerging risks. The coalition will focus on three initial workstreams: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI security governance. These workstreams aim to develop best practices, risk assessment frameworks, and mitigation strategies to enhance AI security across the industry. CoSAI's establishment addresses the fragmented landscape of AI security. Currently, developers face inconsistent and siloed guidelines, making it challenging to assess and mitigate AI-specific risks. CoSAI aims to standardize practices and enhance security measures, building trust among stakeholders globally.
David LaBianca, CoSAI Governing Board co-chair from Google, emphasized the necessity of democratizing knowledge and advancements for secure AI integration. He stated: "CoSAI's establishment was rooted in the necessity of democratizing the knowledge and advancements essential for the secure integration and deployment of AI." Omar Santos from Cisco echoed this sentiment, highlighting the importance of collaboration among leading companies and experts to develop robust AI security standards. CoSAI's open-source community welcomes technical contributions from all interested parties. OASIS, the global standards body hosting CoSAI, also invites additional sponsorship support from companies involved in AI development and deployment. As AI technology continues to advance, CoSAI's role in establishing standardized security practices becomes increasingly vital. By addressing the unique risks associated with AI systems, CoSAI aims to ensure that AI development and deployment are conducted responsibly and securely. This coalition among tech giants marks a significant step forward in the quest for safe and trustworthy AI.
[5]
Introducing the Coalition for Secure AI (CoSAI) and founding member organizations
AI needs a security framework and applied standards that can keep pace with its rapid growth. That's why last year we shared the Secure AI Framework (SAIF), knowing that it was just the first step. Of course, to operationalize any industry framework requires close collaboration with others -- and above all a forum to make that happen. Today at the Aspen Security Forum, alongside our industry peers, we're introducing the Coalition for Secure AI (CoSAI). We've been working to pull this coalition together over the past year, in order to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon. CoSAI includes founding members Amazon, Anthropic, Chainguard, Cisco, Cohere, GenLab, IBM, Intel, Microsoft, NVIDIA, OpenAI, PayPal and Wiz -- and it will be housed under OASIS Open, the international standards and open source consortium. As individuals, developers and companies continue their work to adopt common security standards and best practices, CoSAI will support this collective investment in AI security. Today, we're also sharing the first three areas of focus the coalition will tackle in collaboration with industry and academia: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI security governance. Additionally, CoSAI will collaborate with organizations such as the Frontier Model Forum, Partnership on AI, Open Source Security Foundation and ML Commons to advance responsible AI. As AI advances, we're committed to ensuring effective risk management strategies evolve along with it. We're encouraged by the industry support we've seen over the past year for making AI safe and secure. We're even more encouraged by the action we're seeing from developers, experts and companies big and small to help organizations securely implement, train and use AI. AI developers need -- and end users deserve -- a framework for AI security that meets the moment and responsibly captures the opportunity in front of us.
CoSAI is the next step in that journey and we can expect more updates in the coming months. To learn how you can support CoSAI, you can visit coalitionforsecureai.org. In the meantime, you can visit our Secure AI Framework page to learn more about Google's AI security work.
[6]
OpenAI, Nvidia, Google and others launch AI cybersecurity consortium
More than a dozen tech firms have teamed up to launch an industry group dedicated to making artificial intelligence applications more secure. The Coalition for Secure AI, or CoSAI, was announced today at the Aspen Security Forum. It will operate under the wing of OASIS, a nonprofit that oversees the development of several dozen open-source software projects. Many of those projects focus on easing cybersecurity tasks such as automating breach response workflows. CoSAI's founding members include OpenAI and Anthropic PBC, the two best-funded startups in the large language model ecosystem, as well as rivals Cohere Inc. and GenLab. In the public cloud market, the consortium is backed by Amazon Web Services Inc., Microsoft Corp. and Google LLC. They are joined by Nvidia Corp., Intel Corp., IBM Corp., Cisco Systems Inc., PayPal Holdings Inc., Wiz Inc. and Chainguard Inc. CoSAI is launching with two main objectives. The first is to develop tools and technical guidance that will help organizations secure their AI applications. According to the group's backers, the other goal is to create an ecosystem where companies can share AI-related cybersecurity best practices and technologies. "CoSAI's establishment was rooted in the necessity of democratizing the knowledge and advancements essential for the secure integration and deployment of AI," said David LaBianca, the co-chair of CoSAI's governing board. "With the help of OASIS Open, we're looking forward to continuing this work and collaboration among leading companies, experts, and academia." CoSAI is launching three open-source workstreams, or initiatives, to advance those goals. Each project tackles a different subset of the tasks involved in securing AI applications. According to CoSAI, the first initiative is designed to help software teams scan their machine learning workloads for cybersecurity risks.
To that end, the consortium will develop a taxonomy of common vulnerabilities and ways to address them. CoSAI members will also create a cybersecurity scorecard designed to help developers monitor AI systems for vulnerabilities and report any issues they find to other stakeholders. According to CoSAI, its second inaugural project seeks to ease the task of mitigating AI cybersecurity risks. The goal is to simplify the process of identifying "investments and mitigation techniques to address the security impact of AI use," Google cybersecurity executives Heather Adkins and Phil Venables wrote in a blog post today. The third initiative that CoSAI detailed today focuses on addressing software supply chain risks. Those are vulnerabilities caused by software components that a company obtains from external sources such as GitHub repositories. Before developers can analyze an AI application's external components for vulnerabilities, they must map out what external components it includes. That can be a time-consuming process in large software projects with a significant number of code files. One of CoSAI's priorities will be to ease the workflow. In parallel, the consortium's members will develop ways to address the cybersecurity risks associated with third-party AI models. Many AI application projects rely on neural networks from the open-source ecosystem because building a custom algorithm can be prohibitively expensive. In theory, an external neural network can introduce vulnerabilities into a software project that might enable hackers to launch cyberattacks. CoSAI plans to launch additional cybersecurity initiatives in the future. The initiatives will be supervised by a technical steering committee of AI experts from the private sector and academia.
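The component-mapping step described above can be sketched in a few lines. The snippet below is a minimal illustration, not CoSAI tooling: it builds a simple inventory of externally sourced artifacts (model weights, vendored packages) by recording each file's size and SHA-256 digest, the kind of pinned record a team needs before it can scan those components for vulnerabilities. The function name and JSON shape are our own assumptions for the example.

```python
import hashlib
import json
from pathlib import Path

def inventory(paths):
    """Build a minimal component inventory: file name, size, SHA-256 digest.

    Pinning a digest for every external artifact an AI project pulls in
    (model weights, third-party packages) is one basic way to know
    exactly what the project includes before analyzing it for risks.
    """
    entries = []
    for p in map(Path, paths):
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        entries.append(
            {"name": p.name, "bytes": p.stat().st_size, "sha256": digest}
        )
    return entries

if __name__ == "__main__":
    # Example: hash this script itself as a stand-in for a model artifact.
    print(json.dumps(inventory([__file__]), indent=2))
```

A real supply-chain workflow would go further (signed attestations, SBOM formats, provenance tracking), but even a hash inventory like this makes silent substitution of a third-party model file detectable.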
[7]
Introducing the Coalition for Secure AI (CoSAI)
Today, I am delighted to share the launch of the Coalition for Secure AI (CoSAI). CoSAI is an alliance of industry leaders, researchers, and developers dedicated to enhancing the security of AI implementations. CoSAI operates under the auspices of OASIS Open, the international standards and open-source consortium. CoSAI's founding members include industry leaders such as OpenAI, Anthropic, Amazon, Cisco, Cohere, GenLab, Google, IBM, Intel, Microsoft, Nvidia, and PayPal. Together, our goal is to create a future where technology is not only cutting-edge but also secure-by-default. CoSAI complements existing AI initiatives by focusing on how to integrate and leverage AI securely across organizations of all sizes and throughout all phases of development and usage. CoSAI collaborates with NIST, Open-Source Security Foundation (OpenSSF), and other stakeholders through collaborative AI security research, best practice sharing, and joint open-source initiatives. CoSAI's scope includes securely building, deploying, and operating AI systems to mitigate AI-specific security risks such as model manipulation, model theft, data poisoning, prompt injection, and confidential data extraction. We must equip practitioners with integrated security solutions, enabling them to leverage state-of-the-art AI controls without needing to become experts in every facet of AI security. Where possible, CoSAI will collaborate with other organizations driving technical advancements in responsible and secure AI, including the Frontier Model Forum, Partnership on AI, OpenSSF, and ML Commons. Members, such as Google with its Secure AI Framework (SAIF), may contribute existing work in terms of thought leadership, research, best practices, projects, or open-source tools to enhance the partner ecosystem. Securing AI remains a fragmented effort, with developers, implementors, and users often facing inconsistent and siloed guidelines. 
Assessing and mitigating AI-specific risks without clear best practices and standardized approaches is a challenge, even for the most experienced organizations. Security requires collective action, and the best way to secure AI is with AI. To participate safely in the digital ecosystem -- and secure it for everyone -- individuals, developers, and companies alike need to adopt common security standards and best practices. AI is no exception. CoSAI will collaborate with industry and academia to address key AI security issues. Our initial workstreams include AI and software supply chain security and preparing defenders for a changing cyber landscape. CoSAI's diverse stakeholders from leading tech companies invest in AI security research, share security expertise and best practices, and build technical open-source solutions and methodologies for secure AI development and deployment. CoSAI is moving forward to create a safer AI ecosystem, building trust in AI technologies and ensuring their secure integration across all organizations. The security challenges arising from AI are complicated and dynamic. We are confident that this coalition of technology leaders is well-positioned to make a significant impact in enhancing the security of AI implementations.
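Among the AI-specific risks named above is prompt injection. As a minimal sketch of what a first-pass defender check might look like (not a real defense, and not any CoSAI deliverable), the snippet below flags inputs that match a short, hypothetical list of injection-style phrases; production systems rely on far more than keyword matching.

```python
import re

# Hypothetical phrase patterns chosen for illustration only; keyword
# matching alone is easily bypassed and serves here just to show the
# category of risk a defender's framework would address.
SUSPECT_PATTERNS = [
    r"ignore (all|any|previous) instructions",
    r"reveal (the|your) system prompt",
    r"disregard (the|your) guidelines",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection-style phrase."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in SUSPECT_PATTERNS)
```

In practice such a filter would be one layer among many (input isolation, privilege separation for model outputs, logging), which is precisely the kind of layered guidance a shared framework aims to standardize.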
[8]
Google, Microsoft and other tech giants are now working together on AI security standards
Why it matters: The promise of the group is that each company will follow the same rigorous security standards for their projects to keep malicious hackers at bay. Driving the news: Google announced the new Coalition for Secure AI during the Aspen Security Forum happening in Colorado this week. Zoom in: The new coalition will start its work by developing standards for software supply chain security for AI systems, compiling resources to measure the risk of these tools and pulling together a framework to help defenders determine the best use cases for AI in their work. What they're saying: "This is the industry coming together and doing this as a coalition -- it's not an executive order, it's not a regulation," Heather Adkins, vice president of security engineering at Google, said at an Aspen Security Forum event. Reality check: Many of the participating companies had either already developed their own standards for securing AI or were working on them. The intrigue: This is the first time the industry has come together to work on these AI security issues together. What's next: The coalition is actively accepting new members, Adkins said at the Aspen event.
[9]
Intel Welcomes the Coalition for Secure AI
With artificial intelligence (AI) rapidly transforming our world, developers and adopters face the challenge of securing AI technology while navigating guidelines and standards that are often inconsistent and siloed. As developers work through these challenges, it's critical to develop and share practices that keep security at the forefront. The future of security requires collective action, and AI is no exception. At Intel, we're no stranger to driving new technology adoption across every industry. We see the opportunities and share in the challenges our customers and partners face. And we know the importance of rapidly developing standards and best practices to simplify the use of new innovations. Security is an essential element across our entire product portfolio, and we look forward to bringing our industry-leading security assurance expertise to help everyone improve new AI solutions. At Intel Vision 2024, CEO Pat Gelsinger outlined a strategy for open scalable AI systems, including hardware, software, frameworks and tools. Taking another step forward in that journey, Intel has joined the Coalition for Secure AI (CoSAI) as a founding member alongside Google, IBM and other organizations. CoSAI, hosted by open source global standards body OASIS Open, is an initiative designed to give all practitioners and developers the guidance and tools they need to create AI systems that are secure-by-design. This is a crucial collaborative effort for the industry, bringing together a diverse global group of leaders across companies, academia and other relevant fields who will work together to develop and share holistic approaches, best practices, tools and methodologies for secure AI development and deployment. 
Initially in this effort, CoSAI's contributors will collaborate on three key work streams: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI security governance. As part of Intel's commitment to advancing AI technology responsibly, we will continue to collaborate with industry partners on innovative approaches to address security, transparency and trust. CoSAI also complements the recent introduction of the Linux Foundation AI & Data's latest Sandbox Project: the Open Platform for Enterprise AI (OPEA). In addition to CoSAI, Intel is a founding member of OPEA, which is designed to help accelerate secure, cost-effective generative AI (GenAI) deployments for businesses by driving interoperability across a diverse and heterogeneous ecosystem, starting with retrieval-augmented generation (RAG). As the market responds to the insatiable demand for AI, technology vendors must remain committed to open solutions that provide choice while also driving standards for security that best protect users. And at Intel, we will continue delivering and improving on the product security needed to help secure the development and deployment of AI. More: View the full announcement's news release. | For more information, visit the CoSAI website.
Major tech companies including Google, Microsoft, OpenAI, and Nvidia have joined forces to create the Coalition for Secure AI (CoSAI). This initiative aims to enhance AI safety and security through collaboration and shared research.
In a significant move towards ensuring the responsible development of artificial intelligence, several tech industry giants have come together to form the Coalition for Secure AI (CoSAI). The coalition, announced on July 18, 2024, includes prominent names such as Google, Microsoft, OpenAI, and Nvidia, among others [1][2].
CoSAI's primary goal is to address the growing concerns surrounding AI safety and security. The coalition plans to focus on three initial workstreams: software supply chain security for AI systems, preparing defenders for a changing cybersecurity landscape, and AI security governance.
The founding members of CoSAI include some of the most influential players in the tech industry. While Google, Microsoft, OpenAI, and Nvidia are at the forefront, other companies such as Anthropic and Cohere have also joined the initiative [4]. The coalition is hosted by OASIS Open, a nonprofit standards consortium, underscoring its commitment to the greater good of the AI community and society at large.
One of the key aspects of CoSAI is its focus on collaborative research and development. Member organizations have pledged to share resources, including security research, best practices, and open-source tools and methodologies for secure AI development and deployment.
The formation of CoSAI marks a significant milestone in the AI industry's efforts to self-regulate and address potential risks associated with advanced AI technologies. By bringing together competitors in a collaborative environment, the coalition aims to create a more secure and trustworthy AI ecosystem for developers, businesses, and end-users alike.
As AI continues to evolve and integrate into various aspects of our lives, initiatives like CoSAI are expected to play a crucial role in shaping the future of AI governance and security standards. The success of this coalition could potentially influence regulatory frameworks and public perception of AI technologies in the coming years.