Curated by THEOUTPOST
On Sun, 28 Jul, 12:00 AM UTC
2 Sources
[1]
AI News: US Commerce Dept Launches Tool to Test Model Risks
The industry's growth has called for more proactive policies recently. The increasing demand for and popularity of Artificial Intelligence (AI) models have pushed authorities in several regions to lay down standards, and introducing an AI tool into those regions requires compliance with them. Against this backdrop, the United States Commerce Department, in collaboration with the National Institute of Standards and Technology (NIST), has launched Dioptra, a modular, open-source, web-based tool. The relaunch of this initiative comes at a critical time, as the AI landscape continues to evolve quickly.

The testbed is designed to measure how malicious attacks can negatively impact the performance of an AI system. It is particularly concerned with attacks that "poison" AI model training data. Notably, Dioptra's initial release took place two years ago, with a focus on helping companies that train AI models, and the entities that use them, assess, analyze, and track AI risks. According to NIST, the tool can also be used to benchmark and research models. "Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra," NIST said.

Like several other tools and AI regulations, Dioptra is one approach to securing AI models and protecting users. A few days ago, tech behemoths including Google, Microsoft, Nvidia, and OpenAI took up the matter of AI safety more deliberately and launched the Coalition for Secure AI (CoSAI). CoSAI is focused on establishing robust security frameworks and standards for AI development and deployment. It is worth noting that these companies build some of the most sophisticated AI models: OpenAI has a suite of AI tools, including its newly launched search engine, SearchGPT; Google has Gemini 1.5 Flash, among other tools; and Nvidia remains a thriving chip maker. The steady release of these AI models to the public raises concerns about user safety, and governments in different regions are putting effort and resources into overseeing the burgeoning field. The re-release of Dioptra reflects this deepening commitment to the industry.
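To make the testbed's core idea concrete, below is a minimal sketch of the kind of before-and-after measurement a tool like Dioptra automates: train the same model on clean and deliberately "poisoned" training data, then compare accuracy on held-out data. This uses plain Python with scikit-learn and a label-flipping attack as a stand-in; it is an illustrative assumption, not Dioptra's actual interface or attack catalog.

```python
# Hypothetical illustration (not Dioptra's API): measure how a label-flipping
# "poisoning" attack on training data degrades a classifier's test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary-classification data as a stand-in for a real training corpus.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random fraction of training points (the 'attack')."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(poisoned))
    idx = rng.choice(len(poisoned), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]  # binary task: flip 0 <-> 1
    return poisoned

# Train on increasingly poisoned data and quantify the impact on performance.
for fraction in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, poison_labels(y_train, fraction, rng))
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"poisoned fraction={fraction:.0%}  test accuracy={acc:.3f}")
```

Running this typically shows accuracy falling as the poisoned fraction grows, which is exactly the kind of attack-versus-performance curve a standardized testbed lets different teams measure in a comparable way.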
[2]
NIST releases a tool for testing AI model risk
The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, companies and the broader public, has re-released a testbed designed to measure how malicious attacks, particularly attacks that "poison" AI model training data, might degrade the performance of an AI system.

Called Dioptra (after the classical astronomical and surveying instrument), the modular, open source web-based tool, first released in 2022, seeks to help companies training AI models, and the people using these models, assess, analyze and track AI risks. Dioptra can be used to benchmark and research models, NIST says, as well as to provide a common platform for exposing models to simulated threats in a "red-teaming" environment.

"Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra," NIST wrote in a press release. "The open source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers' claims about their systems' performance."

Dioptra debuted alongside documents from NIST and NIST's recently created AI Safety Institute that lay out ways to mitigate some of the dangers of AI, such as its abuse to generate nonconsensual pornography. It follows the launch of the U.K. AI Safety Institute's Inspect, a toolset similarly aimed at assessing the capabilities of models and overall model safety. The U.S. and U.K. have an ongoing partnership to jointly develop advanced AI model testing, announced at the U.K.'s AI Safety Summit at Bletchley Park in November of last year.

Dioptra is also the product of President Joe Biden's executive order (EO) on AI, which mandates (among other things) that NIST help with AI system testing. The EO, relatedly, also establishes standards for AI safety and security, including requirements for companies developing models (e.g. Apple) to notify the federal government and share results of all safety tests before they're deployed to the public.

As we've written about before, AI benchmarks are hard, not least because the most sophisticated AI models today are black boxes whose infrastructure, training data and other key details are kept under wraps by the companies creating them. A report out this month from the Ada Lovelace Institute, a U.K.-based nonprofit research institute that studies AI, found that evaluations alone aren't sufficient to determine the real-world safety of an AI model, in part because current policies allow AI vendors to selectively choose which evaluations to conduct.

NIST doesn't assert that Dioptra can completely de-risk models. But the agency does propose that Dioptra can shed light on which sorts of attacks might make an AI system perform less effectively, and can quantify this impact on performance. In a major limitation, however, Dioptra only works out of the box on models that can be downloaded and used locally, like Meta's expanding Llama family. Models gated behind an API, such as OpenAI's GPT-4o, are a no-go, at least for the time being.
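The "common platform for simulated threats" idea, and the local-model limitation, can both be seen in a small sketch. A red-teaming harness needs a model it can call directly and repeatedly; the sketch below, which is an illustrative assumption rather than Dioptra code, takes any locally callable model, hits it with repeated perturbations of its inputs, and reports how often its predictions flip.

```python
# Minimal sketch (not Dioptra itself) of a red-teaming harness: expose a model
# to repeated simulated perturbations and record how often its output changes.
# It requires the model as a local callable, mirroring the article's point that
# Dioptra works out of the box only on locally runnable models, not API-gated ones.
from typing import Callable
import numpy as np

def red_team_flip_rate(predict: Callable[[np.ndarray], np.ndarray],
                       inputs: np.ndarray,
                       perturb: Callable[[np.ndarray], np.ndarray],
                       trials: int = 10) -> float:
    """Fraction of inputs whose prediction flips under repeated perturbation."""
    clean = predict(inputs)
    flipped = np.zeros(len(inputs), dtype=bool)
    for _ in range(trials):
        flipped |= (predict(perturb(inputs)) != clean)
    return float(flipped.mean())

# Toy example: a threshold "model" probed with Gaussian-noise perturbations.
rng = np.random.default_rng(1)
toy_model = lambda x: (x.sum(axis=1) > 0).astype(int)
data = rng.normal(size=(500, 8))
noise = lambda x: x + rng.normal(scale=0.5, size=x.shape)
print(f"flip rate under noise: {red_team_flip_rate(toy_model, data, noise):.1%}")
```

Because `predict` here is just a function over local arrays, swapping in a downloaded model (say, a local Llama checkpoint behind a wrapper) is straightforward, whereas an API-gated model would put rate limits, cost, and opacity between the harness and the system under test.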
The National Institute of Standards and Technology (NIST) has released a new tool for evaluating AI model risks. This development comes as the cryptocurrency industry grapples with AI integration and regulatory challenges.
In a significant move to address the growing concerns surrounding artificial intelligence, the National Institute of Standards and Technology (NIST) has unveiled a new tool designed to evaluate the risks associated with AI models. This development comes at a crucial time when industries, including the cryptocurrency sector, are rapidly integrating AI technologies into their operations [1].
The NIST tool aims to provide organizations with a standardized method for assessing the potential risks and vulnerabilities of their AI systems. This initiative is part of a broader effort to establish guidelines and best practices for the responsible development and deployment of AI technologies.
The release of this tool has particular significance for the cryptocurrency industry, which has been increasingly leveraging AI for various applications, from trading algorithms to security protocols. As the sector continues to evolve, the need for robust risk assessment mechanisms has become more pressing [2].
Cryptocurrency exchanges and blockchain projects are now faced with the challenge of ensuring that their AI implementations comply with emerging standards and regulations. The NIST tool could provide a valuable framework for these entities to evaluate and mitigate potential risks associated with their AI-driven systems.
The introduction of the NIST tool comes amid a complex regulatory environment for both AI and cryptocurrencies. Governments and regulatory bodies worldwide are grappling with how to effectively oversee these rapidly advancing technologies while fostering innovation.
For the cryptocurrency industry, which already operates in a highly scrutinized regulatory space, the integration of AI adds another layer of complexity. The NIST tool may serve as a bridge between innovation and compliance, helping crypto companies demonstrate due diligence in their AI implementations.
As the NIST tool moves from prototype to full implementation, its adoption across various sectors, including finance and technology, is likely to increase. For the cryptocurrency industry, early adoption of such risk assessment tools could provide a competitive advantage and help build trust with regulators and users alike.
The tool's development also signals a growing recognition of the need for standardized approaches to AI governance. This could potentially lead to more cohesive regulatory frameworks for AI in financial technologies, impacting how cryptocurrency projects develop and deploy AI solutions in the future.
While the NIST tool presents opportunities for better risk management, it also highlights the challenges facing the cryptocurrency industry. Balancing innovation with risk mitigation and regulatory compliance will be crucial for crypto companies looking to leverage AI technologies.