3 Sources
[1]
MIT releases comprehensive database of AI risks
As research and adoption of artificial intelligence continue to advance at an accelerating pace, so do the risks of using AI. To help organizations navigate this complex landscape, researchers from MIT and other institutions have released the AI Risk Repository, a comprehensive database of hundreds of documented risks posed by AI systems. The repository aims to help decision-makers in government, research, and industry assess the evolving risks of AI.

Bringing order to AI risk classification

While numerous organizations and researchers have recognized the importance of addressing AI risks, efforts to document and classify them have been largely uncoordinated, leaving a fragmented landscape of conflicting classification systems. "We started our project aiming to understand how organizations are responding to the risks from AI," Peter Slattery, incoming postdoc at MIT FutureTech and project lead, told VentureBeat. "We wanted a fully comprehensive overview of AI risks to use as a checklist, but when we looked at the literature, we found that existing risk classifications were like pieces of a jigsaw puzzle: individually interesting and useful, but incomplete."

The AI Risk Repository tackles this challenge by consolidating information from 43 existing taxonomies, including peer-reviewed articles, preprints, conference papers, and reports. This curation process has produced a database of more than 700 unique risks.

The repository uses a two-dimensional classification system. First, risks are categorized by their causes, taking into account the responsible entity (human or AI), the intent (intentional or unintentional), and the timing of the risk (pre-deployment or post-deployment). This causal taxonomy helps clarify the circumstances and mechanisms by which AI risks can arise.
Second, risks are classified into seven distinct domains, including discrimination and toxicity, privacy and security, misinformation, and malicious actors and misuse.

The AI Risk Repository is designed to be a living database. It is publicly accessible, and organizations can download it for their own use. The research team plans to regularly update it with new risks, research findings, and emerging trends.

Evaluating AI risks for the enterprise

The AI Risk Repository is designed to be a practical resource for organizations across sectors. For organizations developing or deploying AI systems, it serves as a valuable checklist for risk assessment and mitigation. "Organizations using AI may benefit from employing the AI Risk Database and taxonomies as a helpful foundation for comprehensively assessing their risk exposure and management," the researchers write. "The taxonomies may also prove helpful for identifying specific behaviors which need to be performed to mitigate specific risks."

For example, an organization developing an AI-powered hiring system can use the repository to identify potential risks related to discrimination and bias. A company using AI for content moderation can draw on the "Misinformation" domain to understand the risks associated with AI-generated content and develop appropriate safeguards.

The research team acknowledges that while the repository offers a comprehensive foundation, organizations will need to tailor their risk assessment and mitigation strategies to their specific contexts. Even so, a centralized, well-structured repository reduces the likelihood of overlooking critical risks.

"We expect the repository to become increasingly useful to enterprises over time," said Neil Thompson, head of the MIT FutureTech Lab. "In future phases of this project, we plan to add new risks and documents and ask experts to review our risks and identify omissions.
After the next phase of research, we should be able to provide more useful information about which risks experts are most concerned about (and why) and which risks are most relevant to specific actors (e.g., AI developers versus large users of AI)."

Shaping future AI risk research

Beyond its practical implications for organizations, the AI Risk Repository is also a valuable resource for AI risk researchers. The database and taxonomies provide a structured framework for synthesizing information, identifying research gaps, and guiding future investigations.

"This database can provide a foundation to build on when doing more specific work," Slattery said. "Before this, people like us had two choices. They could invest significant time to review the scattered literature to develop a comprehensive overview, or they could use a limited number of existing frameworks, which might miss relevant risks. Now they have a more comprehensive database, so our repository will hopefully save time and increase oversight. We expect it to be increasingly useful as we add new risks and documents."

The research team plans to use the AI Risk Repository as a foundation for the next phase of their own research. "We will use this repository to identify potential gaps or imbalances in how risks are being addressed by organizations," Thompson said. "For example, to explore if there is a disproportionate focus on certain risk categories while others of equal significance are being underaddressed."

In the meantime, the team will keep updating the AI Risk Repository as the AI risk landscape evolves, ensuring it remains a useful resource for researchers, policymakers, and industry professionals working on AI risks and risk mitigation.
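The two-dimensional scheme described in the article (a causal taxonomy of entity, intent, and timing, plus a domain) maps naturally onto a simple record structure. Below is a minimal Python sketch of how an organization might represent repository entries and filter them as a risk-assessment checklist; the field names, sample entries, and `checklist` helper are illustrative assumptions, not the repository's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the repository's two-dimensional classification.
@dataclass
class AIRisk:
    description: str
    entity: str   # causal taxonomy: "human" or "AI"
    intent: str   # causal taxonomy: "intentional" or "unintentional"
    timing: str   # causal taxonomy: "pre-deployment" or "post-deployment"
    domain: str   # one of the seven domains, e.g. "Discrimination and toxicity"

# Invented sample entries for illustration only.
risks = [
    AIRisk("Biased screening in an AI hiring tool", "AI", "unintentional",
           "post-deployment", "Discrimination and toxicity"),
    AIRisk("Training data collected without consent", "human", "intentional",
           "pre-deployment", "Privacy and security"),
]

def checklist(entries, domain):
    """Filter entries to one domain, mimicking a domain-focused risk checklist."""
    return [e.description for e in entries if e.domain == domain]

print(checklist(risks, "Discrimination and toxicity"))
# prints ['Biased screening in an AI hiring tool']
```

An organization building a hiring system, as in the article's example, would query the "Discrimination and toxicity" domain; a content-moderation team would query "Misinformation" instead.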
[2]
AI risks are everywhere - and now MIT is adding them all to one database
Researchers created the AI Risk Repository to consolidate data. One of their findings? Misinformation is the least-addressed AI threat.

By now, the risks of artificial intelligence (AI) across applications are well-documented, but hard to access easily in one place when making regulatory, policy, or business decisions. An MIT lab aims to fix that.

On Wednesday, MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) launched the AI Risk Repository, a database of more than 700 documented AI risks. According to CSAIL's release, the database is the first of its kind and will be updated consistently to ensure it can be used as an active resource.

The project was prompted by concerns that global adoption of AI is outrunning how well people and organizations understand implementation risks. Census data indicates that AI usage in US industries climbed from 3.7% to 5.45% -- a 47% increase -- between September 2023 and February 2024. Researchers from CSAIL and MIT's FutureTech Lab found that "even the most thorough individual framework overlooks approximately 30% of the risks identified across all reviewed frameworks," the release states.

Fragmented literature on AI risks can make it difficult for policymakers, risk evaluators, and others to get a full picture of the issues in front of them. "It is hard to find specific studies of risk in some niche domains where AI is used, such as weapons and military decision support systems," said Taniel Yusef, a Cambridge research affiliate not associated with the project. "Without referring to these studies, it can be difficult to speak about technical aspects of AI risk to non-technical experts. This repository helps us do that."

Without a database, some risks can fly under the radar and not be considered adequately, the team explained in the release. "Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots," Dr. Peter Slattery, a project lead and incoming FutureTech Lab postdoc, said in the release.

To address this, researchers at MIT worked with colleagues from other institutions, including the University of Queensland, the Future of Life Institute, KU Leuven, and Harmony Intelligence, to create the database. The Repository aims to provide "an accessible overview of the AI risk landscape," according to the site, and to act as a universal frame of reference that anyone from researchers and developers to businesses and policymakers can use.

To create it, the researchers identified 43 risk classification frameworks by reviewing academic records and databases and speaking to several experts. After distilling more than 700 risks from those 43 frameworks, they categorized each by cause (when or why it occurs), domain, and subdomain (like "Misinformation" and "False or misleading information," respectively).

The risks range from discrimination and misrepresentation to fraud, targeted manipulation, and unsafe use. "The most frequently addressed risk domains," the release explains, "included 'AI system safety, failures, and limitations' (76% of documents); 'Socioeconomic and environmental harms' (73%); 'Discrimination and toxicity' (71%); 'Privacy and security' (68%); and 'Malicious actors and misuse' (68%)." Researchers found that human-computer interaction and misinformation were the least-addressed concerns across risk frameworks.
Fifty-one percent of the risks analyzed were attributed to AI systems as opposed to humans, who were responsible for 34%, and 65% of risks emerged after AI was deployed, as opposed to during development. Topics like discrimination, privacy breaches, and lack of capability were the most discussed issues, appearing in over 50% of the documents researchers reviewed. Concerns that AI damages our information ecosystems were mentioned far less, in only 12% of documents.

MIT hopes the Repository will help decision-makers better navigate and address the risks posed by AI, especially with so many AI governance initiatives emerging rapidly worldwide. The Repository "is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches," said Dr. Neil Thompson, researcher and head of the FutureTech Lab. "We are starting with a comprehensive checklist, to help us understand the breadth of potential risks. We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that's something we should notice and address."

Next, researchers plan to use the Repository to analyze public documents from AI companies and developers to determine and compare risk approaches by sector.
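The finding that even the most thorough individual framework overlooks roughly 30% of the risks identified across all frameworks can be illustrated with a small coverage calculation. The framework names and risk IDs below are invented for this sketch; only the arithmetic mirrors the kind of comparison the researchers describe.

```python
# Each hypothetical framework lists a set of risk IDs it covers.
frameworks = {
    "framework_a": {1, 2, 3, 4, 5, 6, 7},
    "framework_b": {3, 4, 5, 8, 9},
    "framework_c": {1, 5, 9, 10},
}

# The union of all frameworks is the full set of identified risks.
all_risks = set().union(*frameworks.values())

# Coverage of each framework as a fraction of the full risk set.
coverage = {name: len(ids) / len(all_risks) for name, ids in frameworks.items()}
best = max(coverage.values())
print(f"best single-framework coverage: {best:.0%}")
# prints: best single-framework coverage: 70%
```

Here the most thorough framework still misses 30% of the combined risk set, which is why consolidating 43 frameworks into one repository closes blind spots that no single taxonomy catches.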
[3]
A new public database lists all the ways AI could go wrong
The database also shows that the majority of risks from AI are identified only after a model becomes accessible to the public. Just 10% of the risks studied were spotted before deployment. These findings may have implications for how we evaluate AI, as we currently tend to focus on ensuring a model is safe before it is launched.

"What our database is saying is, the range of risks is substantial, not all of which can be checked ahead of time," says Neil Thompson, director of MIT FutureTech and one of the creators of the database. Therefore, auditors, policymakers, and scientists at labs may want to monitor models after they are launched by regularly reviewing the risks they present post-deployment.

There have been many attempts to put together a list like this in the past, but they were concerned primarily with a narrow set of potential harms arising from AI, says Thompson, and the piecemeal approach made it hard to get a comprehensive view of the risks associated with AI.

Even with this new database, it's hard to know which AI risks to worry about the most, a task made even more complicated because we don't fully understand how cutting-edge AI systems even work. The database's creators sidestepped that question, choosing not to rank risks by the level of danger they pose.

"What we really wanted to do was to have a neutral and comprehensive database, and by neutral, I mean to take everything as presented and be very transparent about that," says the database's lead author, Peter Slattery, a postdoctoral associate at MIT FutureTech.

But that tactic could limit the database's usefulness, says Anka Reuel, a PhD student in computer science at Stanford University and member of its Center for AI Safety, who was not involved in the project. She says merely compiling risks associated with AI will soon be insufficient.
"They've been very thorough, which is a good starting point for future research efforts, but I think we are reaching a point where making people aware of all the risks is not the main problem anymore," she says. "To me, it's translating those risks. What do we actually need to do to combat [them]?"

This database opens the door for future research. Its creators made the list in part to dig into their own questions, like which risks are under-researched or not being tackled. "What we're most worried about is, are there gaps?" says Thompson.

"We intend this to be a living database, the start of something. We're very keen to get feedback on this," Slattery says. "We haven't put this out saying, 'We've really figured it out, and everything we've done is going to be perfect.'"
MIT researchers have created a database cataloging potential risks associated with artificial intelligence systems. This initiative aims to help developers and policymakers better understand and mitigate AI-related dangers.
In a significant move to address growing concerns surrounding artificial intelligence (AI), researchers at the Massachusetts Institute of Technology (MIT) have unveiled a comprehensive database cataloging potential risks associated with AI systems [1]. The initiative, known as the AI Risk Repository, aims to provide developers, policymakers, and the public with a centralized resource for understanding and mitigating AI-related dangers.

The database, which is publicly accessible, consolidates more than 700 risks drawn from 43 existing classification frameworks, spanning domains such as discrimination and toxicity, privacy and security, misinformation, and malicious actors and misuse [2]. Each risk is categorized by its cause (the responsible entity, the intent, and whether it arises before or after deployment), its domain, and its subdomain.

The AI Risk Repository is not a static resource but a living database: the research team plans to update it regularly with new risks, research findings, and emerging trends, and is seeking feedback and expert review to identify omissions [3].

The database arrives at a crucial time, when governments and organizations worldwide are grappling with how to regulate AI technologies. By providing a comprehensive, structured overview of AI risks, it serves as a valuable tool for policymakers developing informed regulations and guidelines [1].

For AI developers and researchers, the repository offers insight into potential pitfalls and challenges in AI system design and deployment. This knowledge can inform safety protocols and ethical considerations during the development process [2].

The repository also serves an educational role. By making information about AI risks accessible to the public, it helps raise awareness of AI's potential impacts on society. This transparency is crucial for fostering informed public discourse on AI technologies and their implications [3].

As AI continues to advance and integrate into various aspects of our lives, the importance of such a database is likely to grow. However, maintaining the accuracy and relevance of the information while keeping pace with rapid technological developments presents an ongoing challenge for the MIT team and its collaborators [1].
Summarized by Navi