Sources
[1]
Korea's AI safety institute officially launches
Kim Myuhng-joo, the inaugural chief of the AI Safety Institute, speaks during an opening ceremony in Pangyo, Gyeonggi Province, Nov. 27. (Yonhap)

Korea's artificial intelligence safety institute officially opened its doors Wednesday with the aim of responding to various risks associated with the fast-evolving technology, the science ministry said. The new AI Safety Institute comes as a follow-up to the AI Seoul Summit held here in May, where leaders from Korea, Britain and eight other nations adopted a joint declaration on promoting safe, innovative and inclusive AI, according to the Ministry of Science and ICT.

The institute, located in Pangyo, just south of Seoul, will spearhead the country's research on risks involving AI, such as abuse and loss of control of the technology, while also serving as a network hub for industry, academia and research institutes studying AI safety. It will also take part in the international network of AI safety institutes to promote global discussions on the agenda, the ministry said.

Kim Myuhng-joo, an information security professor at Seoul Women's University, has been appointed the inaugural chief of the institute. "The institute will play a role as an organization supporting local AI companies to minimize risk factors that can hinder the firms from strengthening their global competitiveness, rather than working as a regulatory body," Kim said during the institute's opening ceremony.

At the ceremony, a consortium of AI-related firms, universities and institutes signed a memorandum of understanding to join hands for research, policymaking and evaluation of AI safety. A total of 24 entities, including leading tech firms such as Naver, KT and Kakao, as well as top universities such as Seoul National University, the Korea Advanced Institute of Science and Technology and Korea University, are initial members of the consortium, according to the ministry.
After the ceremony, Kim told reporters he had held discussions with European AI safety officials at the International Network of AI Safety Institutes meeting in San Francisco last week on possible ways to help Korean firms receive regulatory exemptions when entering the European market in return for complying with local AI-related laws. Korea has also decided to join Japan and Singapore in researching the phenomenon of AI giving different answers to the same questions under different languages and cultural contexts, according to Kim. With regard to the international network of AI safety institutes, Kim said the United States has been named chair of the network, with the vice chair yet to be decided. (Yonhap)
[2]
AI Safety Institute Launched as Korea's AI Research Hub | Newswise
Newswise -- The Ministry of Science and ICT (MSIT), headed by Minister Yoo Sang-im, held the launch ceremony for the "AI Safety Institute" (AISI) on Wednesday, November 27, at the Pangyo Global R&D Center.

At the "AI Seoul Summit" last May, leaders from 10 countries recognized safety as a key component of responsible AI innovation and emphasized the importance of establishing AI safety institutes and fostering global collaboration for safe AI. President Yoon Suk Yeol also expressed his commitment, stating, "We will work towards establishing an AI safety institute in Korea and actively participate in a global network to enhance AI safety." After thorough preparations regarding the institute's organization, budget, personnel, and functions, the AI Safety Institute has now been officially launched.

The AISI is a dedicated organization established within ETRI to systematically and professionally address various AI risks, including technological limitations, human misuse, and potential loss of control over AI. As Korea's hub for AI safety research, the AISI will facilitate collaborative research and information sharing among industry, academia, and research institutes in the field of AI safety. Furthermore, as a member of the "International Network of AI Safety Institutes" (comprising 10 countries, launched on November 21), the AISI is committed to taking a responsible role in strengthening global collaboration for safe AI. Through these efforts, the AISI aims to develop competitive technologies, nurture skilled professionals in the AI safety sector, and advance AI safety policies based on scientific research data.

The launch ceremony brought together key government officials, including Yoo Sang-im, Minister of Science and ICT; Yeom Jae-ho, Vice Chair of the National AI Committee; and Lee Kyung-woo, Presidential Secretary for AI and Digital.
Over 40 prominent figures from the AI industry, academia, and research sectors also attended, including Bae Kyung-hoon, Chief of LG AI Research; Oh Hye-yeon, Director of the KAIST AI Institute; Lee Eun-ju, Director of the Center for Trustworthy AI at Seoul National University; and Bang Seung-chan, President of the Electronics and Telecommunications Research Institute (ETRI).

At the event, Professor Yoshua Bengio, a globally renowned AI scholar and Global Advisor to the National AI Committee, congratulated the Korean government on establishing the AI Safety Institute in alignment with the Seoul Declaration. He emphasized the Institute's critical roles, including (1) researching and advancing risk assessment methodologies through industry collaboration, (2) supporting the development of AI safety requirements, and (3) fostering international cooperation to harmonize global AI safety standards. Additionally, the directors of AI safety institutes from the United States, the United Kingdom, and Japan delivered congratulatory speeches, stating, "We have high expectations for Korea's AI Safety Institute" and emphasizing "the importance of global collaboration in AI safety."

Kim Myung-joo, the inaugural Director of the AISI, outlined the Institute's vision and operational plans during the ceremony. In his presentation, he stated, "The AISI will focus on evaluating potential risks that may arise from AI utilization, developing and disseminating policies and technologies to prevent and minimize these risks, and strengthening collaboration both domestically and internationally." Director Kim emphasized, "The AISI is not a regulatory body but a collaborative organization dedicated to supporting Korean AI companies by reducing risk factors that hinder their global competitiveness."
At the signing ceremony for the "Korea AI Safety Consortium" (hereinafter referred to as the "Consortium"), 24 leading Korean organizations from industry, academia, and research sectors signed a Memorandum of Understanding (MOU) to promote mutual cooperation in AI safety policy research, evaluation, and R&D. The AISI and Consortium member organizations will jointly focus on key initiatives, including the research, development, and validation of an AI safety framework (risk identification, evaluation, and mitigation), policy research to align with international AI safety norms, and technological collaboration on AI safety. Moving forward, they plan to refine the Consortium's detailed research topics and operational strategies. The member organizations also presented their expertise in AI safety research and outlined their plans for Consortium activities, affirming their strong commitment to active collaboration with the AISI.

< Participating Organizations in the "AI Safety Consortium" >

Minister Yoo Sang-im of the MSIT emphasized, "AI safety is a prerequisite for sustainable AI development and one of the greatest challenges that all of us in the AI field must tackle together." He noted, "In the short span of just one year since the AI Safety Summit in November 2023 and the AI Seoul Summit in May 2024, major countries such as the United States, the United Kingdom, Japan, Singapore, and Canada have established AI safety institutes, creating an unprecedentedly swift and systematic framework for international AI safety cooperation." Minister Yoo further emphasized, "By bringing together the research capabilities of industry, academia, and research institutes through the AISI, we will rapidly secure the technological and policy expertise needed to take a leading role in the global AI safety alliance. We will actively support the AISI's growth into a research hub representing the Asia-Pacific region in AI safety."

ETRI is a non-profit, government-funded research institute.
Since its foundation in 1976, ETRI, a global ICT research institute, has made immense efforts to drive Korea's remarkable growth in the ICT industry, helping establish Korea as one of the world's leading ICT nations by continually developing world-first, world-class technologies.
Korea has officially launched its AI Safety Institute to address risks associated with artificial intelligence, promote safe AI development, and participate in global collaborations for AI safety.
Korea has officially launched its Artificial Intelligence Safety Institute (AISI) in Pangyo, just south of Seoul, marking a significant step in the country's commitment to addressing AI-related risks and fostering global collaboration in AI safety [1]. The institute's establishment follows the AI Seoul Summit held in May, where leaders from ten countries, including Korea and Britain, adopted a joint declaration on promoting safe, innovative, and inclusive AI [1].
The AISI, operating under the Ministry of Science and ICT, aims to spearhead research on various AI risks, including technological limitations, human misuse, and potential loss of control over AI [2]. Kim Myuhng-joo, an information security professor at Seoul Women's University, has been appointed as the inaugural chief of the institute [1].
Kim emphasized that the AISI will primarily support local AI companies in minimizing risk factors that could hinder their global competitiveness, rather than functioning as a regulatory body [1][2]. This approach aligns with the institute's goal of fostering innovation while ensuring safety in AI development.
A key feature of the AISI's launch was the formation of the "Korea AI Safety Consortium," comprising 24 leading organizations from industry, academia, and research sectors [1][2]. This consortium, which includes prominent tech firms like Naver, KT, and Kakao, as well as top universities such as Seoul National University and the Korea Advanced Institute of Science and Technology, signed a memorandum of understanding to collaborate on AI safety research, policymaking, and evaluation [1].
The AISI is set to play a crucial role in international AI safety efforts. As a member of the International Network of AI Safety Institutes, which includes ten countries, Korea aims to actively participate in global discussions and collaborations on AI safety [1][2]. Kim revealed that discussions have already taken place with European AI safety officials regarding potential regulatory exemptions for Korean firms entering the European market [1].
The institute plans to focus on evaluating potential risks arising from AI utilization, developing policies and technologies to prevent and minimize these risks, and strengthening both domestic and international collaborations [2]. Additionally, Korea has decided to join Japan and Singapore in researching the phenomenon of AI providing different answers to the same questions under varying linguistic and cultural contexts [1].
Professor Yoshua Bengio, a renowned AI scholar and Global Advisor to the National AI Committee, congratulated the Korean government on the initiative, emphasizing the institute's critical roles in advancing risk assessment methodologies, supporting AI safety requirements, and fostering international cooperation [2].
As AI continues to evolve rapidly, the establishment of the AISI represents Korea's proactive approach to ensuring the responsible development and deployment of AI technologies while positioning itself as a key player in the global AI safety landscape.