2 Sources
2 Sources
[1]
Korean researchers propose international standards to ensure AI safety and trustworthiness
As artificial intelligence (AI) technology rapidly pervades our lives and industries, ensuring its safety and trustworthiness is a global challenge. In this context, Korean researchers are gaining traction by leading the development of two key international standards. The Electronics and Telecommunications Research Institute (ETRI) has proposed the "AI Red Team Testing" standard, which aims to proactively identify risks in AI systems, and the "Trustworthiness Fact Label (TFL)" standard, which aims to help consumers easily understand the authenticity level of AI, to the International Organization for Standardization (ISO/IEC) and has begun full-scale development. With this, Korea has elevated its status beyond being a mere "fast follower" of technology to a "first mover," setting the rules for the AI era. "AI Red Team Testing" is a method of aggressively exploring and testing how secure an AI system is. For example, it proactively identifies situations where generative AI may produce incorrect information or be exploited to circumvent user protections. ETRI serves as the editor of ISO/IEC 42119-7, the international standard in this field, creating common international test procedures and methods that can be applied across a wide range of fields, including medicine, finance, and national defense. Meanwhile, ETRI, together with the Ministry of Food and Drug Safety, hosted the first "Advanced AI Digital Medical Product Red Team Challenge and Technology Workshop" in Korea at the Novotel Seoul Dongdaemun Hotel on September 4th and 5th. This challenge is the first event of its kind in Asia and Korea for advanced AI medical devices, where medical professionals, security experts, and the general public will participate to examine the biases and risks of AI. ETRI is also developing a medical-specific red team evaluation methodology in collaboration with Seoul Asan Medical Center, and will create a red team test system for digital medical products that applies advanced AI technology and conduct empirical testing. In addition, it has formed a council with major companies such as STA, NAVER, Upstage, SelectStar, KT, and LG AI Research Institute to strengthen cooperation on international AI red team standardization. Another key standard is trustworthiness fact labels (TFLs) The label is an at-a-glance visualization of how trustworthy an AI system is, providing transparent information to consumers, much like a nutrition label on a food product. ETRI is leading the development of the ISO/IEC 42117 series of standards, which can be operationalized in a variety of ways, with companies providing the information themselves or having it verified and certified by a third-party organization. In the future, ETRI is even considering incorporating ESG factors, such as AI's carbon footprint. In conjunction with the AI Management System Standard (ISO/IEC 42001), which is used as an international certification standard for organizations using AI, the standard will serve as a framework to demonstrate how trustworthy the developed products and services are. These two standards align with the government's Sovereign AI and AI G3 leapfrog strategies. This is evaluated as an example that goes beyond simply securing technological prowess and makes a practical contribution to the competition for leadership in creating global AI rules. Just as the National Institute of Standards and Technology (NIST) in the United States supports national and international standardization to realize the national AI strategy, ETRI envisions supporting the realization of the national AI strategy by developing AI security technologies and leading international standardization of AI safety and trustworthiness, including the activities of the AI Safety Research Institute. Kim Wook, PM of the Institute of Information & Communications Technology Planning & Evaluation (IITP), said, "Providing AI safety and trustworthiness will make it easier for everyone to use AI, and leading the way in international standards this time is a turning point toward becoming a country that leads AI norms." Lee Seung Yun, Assistant Vice President of ETRI's Standards & Open Source Research Division, also said, "AI red team testing and trustworthiness labels are key technical elements included in AI regulatory policies in the U.S., EU, and other countries, and these international standards will serve as common criteria for evaluating the safety and trustworthiness of AI systems around the world. "ETRI will continue to lead international standardization in the field of AI safety and trustworthiness, making Korea the center of excellence for not only Sovereign AI but also Sovereign AI safety technologies."
[2]
ETRI Leads International Standards to Ensure AI Safety and Trustworthiness | Newswise
Types and Contents of Trustworthy Fact Label (TFL), and Target Applications Newswise -- As artificial intelligence (AI) technology rapidly pervades our lives and industries, ensuring its safety and trustworthiness is a global challenge. In this context, Korean researchers are gaining traction by leading the development of two key international standards. Electronics and Telecommunications Research Institute (ETRI) announced that it has proposed the "AI Red Team Testing" standard, which aims to proactively identify risks in AI systems, and the "Trustworthiness Fact Label (TFL)" standard, which aims to help consumers easily understand the authenticity level of AI, to the International Organization for Standardization (ISO/IEC) and has begun full-scale development. With this, Korea has elevated its status beyond being a mere "fast follower" of technology to a "first mover," setting the rules for the AI era. 'AI Red Team Testing' is a method of aggressively exploring and testing how secure an AI system is. For example, it proactively identifies situations where generative AI may produce incorrect information or be exploited to circumvent user protections. ETRI serves as the editor of ISO/IEC 42119-7, the international standard in this field, creating common international test procedures and methods that can be applied across a wide range of fields, including medicine, finance, and national defense. Meanwhile, ETRI, together with the Ministry of Food and Drug Safety, has hosted the first 'Advanced AI Digital Medical Product Red Team Challenge and Technology Workshop' in Korea at the Novotel Seoul Dongdaemun Hotel on September 4th and 5th. This challenge is the first event of its kind in Asia and Korea for advanced AI medical devices, where medical professionals, security experts, and the general public will participate to examine the biases and risks of AI. ETRI is also developing a medical-specific red team evaluation methodology in collaboration with Seoul Asan Medical Center, and will create a red team test system for digital medical products that applies advanced AI technology and conduct empirical testing. In addition, it has formed a council with major companies such as STA, NAVER, Upstage, SelectStar, KT, and LG AI Research Institute to strengthen cooperation on international AI red team standardization. Another key standard is Trustworthiness Fact Labels (TFLs). The label is an at-a-glance visualization of how trustworthy an AI system is, providing transparent information to consumers, much like a nutrition label on a food product. ETRI is leading the development of the ISO/IEC 42117 series of standards, which can be operationalized in a variety of ways, with companies providing the information themselves or having it verified and certified by a third-party organization. In the future, ETRI is even considering incorporating ESG factors, such as AI's carbon footprint. In conjunction with the AI Management System Standard (ISO/IEC 42001), which is used as an international certification standard for organizations using AI, the standard will serve as a framework to demonstrate how trustworthy the developed products and services are. These two standards align with the government's Sovereign AI and AI G3 leapfrog strategies. This is evaluated as an example that goes beyond simply securing technological prowess and makes a practical contribution to the competition for leadership in creating global AI rules. Just as the National Institute of Standards and Technology (NIST) in the United States supports national and international standardization to realize the national AI strategy, ETRI envisions supporting the realization of the national AI strategy by developing AI security technologies and leading international standardization of AI safety and trustworthiness, including the activities of the AI Safety Research Institute. Kim Wook, PM of the Institute of Information & Communications Technology Planning & Evaluation (IITP), said, "Providing AI safety and trustworthiness will make it easier for everyone to use AI, and leading the way in international standards this time is a turning point toward becoming a country that leads AI norms." Lee Seung Yun, Assistant Vice President of ETRI's Standards & Open Source Research Division, also said, "AI red team testing and trustworthiness labels are key technical elements included in AI regulatory policies in the U.S., EU, and other countries, and these international standards will serve as common criteria for evaluating the safety and trustworthiness of AI systems around the world. ETRI will continue to lead international standardization in the field of AI safety and trustworthiness, making Korea the center of excellence for not only Sovereign AI but also Sovereign AI safety technologies." ### This achievement was made through the "Development of Advanced Artificial Intelligence Safety and Trustworthiness Testing and Evaluation Technology Standards" project supported by the Ministry of Science and ICT and the Institute of Information & Communications Technology Planning & Evaluation (IITP), and the "Development and Verification of Generative AI Digital Medical Product Red Team Testing and Evaluation Technology" project of the Ministry of Food and Drug Safety Consortium. ETRI is a non-profit government-funded research institute. Since its foundation in 1976, ETRI, a global ICT research institute, has been making its immense effort to provide Korea a remarkable growth in the field of ICT industry. ETRI delivers Korea as one of the top ICT nations in the World, by unceasingly developing world's first and best technologies.
Share
Share
Copy Link
Korean researchers at ETRI are spearheading two crucial international AI safety standards - AI Red Team Testing for proactive risk identification and Trustworthiness Fact Labels for consumer transparency, positioning Korea as a first mover in global AI governance.
South Korea is positioning itself as a global leader in artificial intelligence safety and governance through groundbreaking international standardization efforts. The Electronics and Telecommunications Research Institute (ETRI) has proposed two critical standards to the International Organization for Standardization (ISO/IEC): AI Red Team Testing and Trustworthiness Fact Labels, marking Korea's transition from a technology "fast follower" to a "first mover" in setting global AI rules
1
.The AI Red Team Testing standard, designated as ISO/IEC 42119-7, represents a proactive approach to identifying vulnerabilities in AI systems. This methodology involves aggressively exploring and testing AI security by identifying scenarios where generative AI might produce incorrect information or be exploited to circumvent user protections
1
.
Source: Tech Xplore
ETRI serves as the editor for this international standard, developing common test procedures and methods applicable across diverse sectors including medicine, finance, and national defense. The institute has established partnerships with major Korean companies including STA, NAVER, Upstage, SelectStar, KT, and LG AI Research Institute to strengthen cooperation on international AI red team standardization
2
.In a significant milestone for Asian AI safety, ETRI collaborated with the Ministry of Food and Drug Safety to host Korea's first "Advanced AI Digital Medical Product Red Team Challenge and Technology Workshop" at the Novotel Seoul Dongdaemun Hotel on September 4-5. This event marked the first of its kind in Asia for advanced AI medical devices, bringing together medical professionals, security experts, and the general public to examine AI biases and risks
1
.ETRI is developing medical-specific red team evaluation methodologies in collaboration with Seoul Asan Medical Center, creating comprehensive test systems for digital medical products that incorporate advanced AI technology and conducting empirical testing to ensure safety standards
2
.Related Stories
The second major standard, Trustworthiness Fact Labels (TFL), functions similarly to nutrition labels on food products, providing consumers with at-a-glance visualization of AI system trustworthiness. ETRI leads the development of the ISO/IEC 42117 series, which can be implemented through company self-reporting or third-party verification and certification
1
.The TFL standard is designed to work in conjunction with the AI Management System Standard (ISO/IEC 42001), serving as a framework for organizations to demonstrate the trustworthiness of their AI-powered products and services. ETRI is considering incorporating Environmental, Social, and Governance (ESG) factors, including AI's carbon footprint, into future iterations of the standard
2
.These standardization efforts align with Korea's Sovereign AI and AI G3 leapfrog strategies, representing a shift from merely securing technological capabilities to actively contributing to global AI governance leadership. The initiative parallels the role of the National Institute of Standards and Technology (NIST) in the United States, supporting national AI strategy through international standardization efforts
1
.Kim Wook, Project Manager at the Institute of Information & Communications Technology Planning & Evaluation (IITP), emphasized that "providing AI safety and trustworthiness will make it easier for everyone to use AI, and leading the way in international standards this time is a turning point toward becoming a country that leads AI norms"
2
.Lee Seung Yun, Assistant Vice President of ETRI's Standards & Open Source Research Division, noted that these standards address key technical elements in AI regulatory policies across the United States, European Union, and other nations, positioning them as common criteria for global AI safety and trustworthiness evaluation
1
.Summarized by
Navi
[1]
28 Nov 2024•Policy and Regulation

07 Sept 2024

08 May 2025•Policy and Regulation
