Curated by THEOUTPOST
On Tue, 4 Feb, 8:06 AM UTC
3 Sources
[1]
AI regulation around the world
Countries and economic blocs around the world are at different stages of regulating artificial intelligence, from a relative "Wild West" in the United States to highly complex rules in the European Union. Here are some key points about regulation in major jurisdictions, ahead of the Paris AI summit on February 10-11:

United States

Returning President Donald Trump last month rescinded Joe Biden's October 2023 executive order on AI oversight. Largely voluntary, it required major AI developers like OpenAI to share safety assessments and vital information with the federal government. Backed by major tech companies, it was aimed at protecting privacy and preventing civil rights violations, and called for safeguards on national security. Home to top developers, the United States now has no formal AI guidelines -- although some existing privacy protections do still apply. Under Trump, the United States has "picked up their cowboy hat again, it's a complete Wild West", said Yael Cohen-Hadria, a digital lawyer at consultancy EY. The administration has effectively said that "we're not doing this law anymore... we're setting all our algorithms running and going for it", she added.

China

China's government is still developing a formal law on generative AI. A set of "Interim Measures" requires that AI respects personal and business interests, does not use personal information without consent, signposts AI-generated images and videos, and protects users' physical and mental health. AI must also "adhere to core socialist values" -- effectively banning AI language models from criticizing the ruling Communist Party or undermining China's national security. DeepSeek, whose frugal yet powerful R1 model shocked the world last month, is an example, resisting questions about President Xi Jinping or the 1989 crushing of pro-democracy demonstrations in Tiananmen Square. While regulating businesses closely, especially foreign-owned ones, China's government will grant itself "strong exceptions" to its own rules, Cohen-Hadria predicted.

European Union

In contrast to both the United States and China, "the ethical philosophy of respecting citizens is at the heart of European regulation", Cohen-Hadria said. "Everyone has their share of responsibility: the provider, whoever deploys (AI), even the final consumer." The "AI Act" passed in March 2024 -- some of whose provisions apply from this week -- is the most comprehensive regulation in the world. Using AI for predictive policing based on profiling is banned, as are systems that use biometric information to infer an individual's race, religion or sexual orientation. The law takes a risk-based approach: if a system is high-risk, a company has a stricter set of obligations to fulfil. EU leaders have argued that clear, comprehensive rules will make life easier for businesses. Cohen-Hadria pointed to strong protections for intellectual property and efforts to allow data to circulate more freely while granting citizens control. "If I can access a lot of data easily, I can create better things faster," she said.

India

Like China, India -- co-host of next week's summit -- has a law on personal data but no specific text governing AI. Cases of harm originating from generative AI have been tackled with existing legislation on defamation, privacy, copyright infringement and cybercrime. New Delhi knows the value of its high-tech sector and "if they make a law, it will be because it has some economic return", Cohen-Hadria said. Occasional media reports and government statements about AI regulation have yet to be followed up with concrete action. Top AI firms including Perplexity blasted the government in March 2024 when the IT ministry issued an "advisory" saying firms would require government permission before deploying "unreliable" or "under-testing" AI models. It came days after Google's Gemini in some responses accused Prime Minister Narendra Modi of implementing fascist policies. Hastily updated rules called only for disclaimers on AI-generated content.

Britain

Britain's centre-left Labour government has included AI in its agenda to boost economic growth. The island nation boasts the world's third-largest AI sector after the United States and China. Prime Minister Keir Starmer in January unveiled an "AI opportunities action plan" that called for London to chart its own path. AI should be "tested" before it is regulated, Starmer said. "Well-designed and implemented regulation... can fuel fast, wide and safe development and adoption of AI," the action plan document read. By contrast, "ineffective regulation could hold back adoption in crucial sectors", it added. A consultation is under way to clarify copyright law's application to AI, aiming to protect the creative industry.

International efforts

The Global Partnership on Artificial Intelligence (GPAI) brings together more than 40 countries, aiming to encourage responsible use of the technology. Members will meet on Sunday "in a broader format" to lay out an "action plan for 2025", the French presidency has said. The Council of Europe in May last year adopted the first-ever binding international treaty governing the use of AI, with the US, Britain and European Union among the signatories. Of 193 UN member countries, just seven belong to all seven major AI governance initiatives, while 119 -- mostly in the Global South -- belong to none.
[2]
AI regulation around the world
[3]
AI regulation around the world
As the Paris AI summit approaches, countries worldwide are at various stages of regulating artificial intelligence, from the US's "Wild West" approach to the EU's comprehensive rules.
As the world gears up for the Paris AI summit on February 10-11, 2025, the landscape of artificial intelligence regulation varies dramatically across major jurisdictions. From a relatively unregulated environment in the United States to highly complex rules in the European Union, countries are taking divergent approaches to managing the rapidly evolving AI sector [1][2][3].
In a significant policy shift, returning President Donald Trump rescinded Joe Biden's October 2023 executive order on AI oversight. The move has left the United States without formal AI guidelines, although some existing privacy protections still apply [1][2].
Yael Cohen-Hadria, a digital lawyer at consultancy EY, describes the current US approach as "a complete Wild West," with the administration essentially saying, "we're not doing this law anymore... we're setting all our algorithms running and going for it" [1][2][3].
In stark contrast to the US approach, the European Union has implemented the most comprehensive AI regulation globally. The "AI Act," passed in March 2024, places ethical considerations and citizen respect at its core [1][2].
Key features of the EU regulation include:
- A risk-based approach: the higher the risk a system poses, the stricter the obligations on the company behind it
- Bans on predictive policing based on profiling and on systems that use biometric information to infer an individual's race, religion or sexual orientation
- Strong protections for intellectual property
- Measures to let data circulate more freely while keeping citizens in control
China is still developing formal legislation on generative AI but has implemented "Interim Measures" that require AI to respect personal and business interests, protect user privacy, and adhere to "core socialist values" [1][2][3].
The measures also prohibit AI from generating content that threatens the ruling Communist Party or China's national security. An example of this is the DeepSeek R1 model, which resists questions about sensitive political topics [1][2][3].
As co-host of the upcoming Paris summit, India is navigating its regulatory approach. While it has laws on personal data, there's no specific legislation governing AI. The country has been addressing AI-related issues through existing laws on defamation, privacy, and cybercrime [1][2].
Recent controversies, such as the government's advisory requiring permission to deploy "unreliable" AI models and Google Gemini's responses accusing Prime Minister Narendra Modi of implementing fascist policies, have highlighted the need for more concrete regulations [1][2].
The UK, boasting the world's third-largest AI sector, is taking a growth-oriented approach. Prime Minister Keir Starmer's "AI opportunities action plan" emphasizes testing AI before regulation and aims to create an environment that fuels "fast, wide and safe development and adoption of AI" [1][2][3].
The Global Partnership on Artificial Intelligence (GPAI), comprising over 40 countries, is working to encourage responsible AI use. However, of 193 UN member countries, only seven belong to all seven major AI governance initiatives, while 119 -- mostly in the Global South -- belong to none [1].
As the Paris AI summit approaches, these divergent regulatory approaches highlight the complex challenge of balancing innovation, economic growth, and ethical considerations in the rapidly evolving field of artificial intelligence.
Reference
[1]
[2]
[3]
The Paris AI Action Summit brings together world leaders and tech executives to discuss AI's future, with debates over regulation, safety, and economic benefits taking center stage.
47 Sources
The Paris AI Action Summit concluded with a declaration signed by 60 countries, but the US and UK's refusal to sign highlights growing divisions in global AI governance approaches.
18 Sources
As world leaders gather in Paris for an AI summit, experts emphasize the need for greater regulation to prevent AI from escaping human control. The summit aims to address both risks and opportunities associated with AI development.
2 Sources
The Paris AI summit marks a significant moment in global AI policy, with many nations pushing for regulation and sustainability despite U.S. resistance. This event highlights growing international consensus on AI governance.
2 Sources
The Trump administration revokes Biden's AI executive order, signaling a major shift towards deregulation and market-driven AI development in the US. This move raises concerns about safety, ethics, and international cooperation in AI governance.
4 Sources