Curated by THEOUTPOST
On Fri, 20 Dec, 12:01 AM UTC
2 Sources
[1]
US Homeland Security attacks EU effort to police artificial intelligence
The outgoing head of the US Department of Homeland Security believes Europe's "adversarial" relationship with tech companies is hampering a global approach to regulation that could result in security vulnerabilities. Alejandro Mayorkas told the Financial Times the US -- home of the world's top artificial intelligence groups, including OpenAI and Google -- and Europe are not on a "strong footing" because of a difference in regulatory approach. He stressed the need for "harmonisation across the Atlantic", expressing concern that relationships between governments and the tech industry are "more adversarial" in Europe than in the US. "Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective," Mayorkas said, adding that companies would also struggle to navigate different regulations across jurisdictions.
The warning comes after the EU brought its AI Act into force this year, considered the strictest law governing the nascent technology anywhere in the world. It introduces restrictions on "high risk" AI systems and rules designed to create more transparency about how AI groups use data. The UK government also plans to introduce legislation that would compel AI companies to give access to their models for safety assessments.
In the US, president-elect Donald Trump has vowed to cancel his predecessor Joe Biden's executive order on AI, which set up a safety institute to conduct voluntary tests on models. Mayorkas said he did not know if the US safety institute "would stay" under the new administration, but warned prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving sector.
Mayorkas's comments highlight fractures between European and American approaches to AI oversight as policymakers try to balance innovation with safety concerns. The DHS is tasked with protecting the security and safety of the US against threats such as terrorism and cyber attacks.
That responsibility will fall to Kristi Noem, the South Dakota governor Trump chose to run the department. The president-elect has also named venture capitalist David Sacks, a critic of tech regulation, as his AI and crypto tsar.
In the US, efforts to regulate the technology have been thwarted by fears it could stifle innovation. In September, California governor Gavin Newsom vetoed an AI safety bill that would have governed the technology within the state, citing such concerns. The Biden administration's early approach to AI regulation has been accused both of being too heavy-handed and of not going far enough. Silicon Valley venture capitalist Marc Andreessen said during a podcast interview this week that he was "very scared" about government officials' plans for AI policy after meetings with Biden's team this summer. He described the officials as "out for blood". Republican senator Ted Cruz has also recently warned against "heavy-handed" foreign regulatory influence from policymakers in Europe and the UK over the sector.
Mayorkas said: "I worry about a rush to legislate at the expense of innovation and inventiveness because lord knows our regulatory apparatus and our legislative apparatus is not nimble." He defended his department's preference for "descriptive" rather than "prescriptive" guidelines: "The mandatory structure is perilous in a rapidly evolving world."
The DHS has been actively incorporating AI into its operations, aiming to demonstrate that government agencies can implement new technologies while ensuring safe and secure deployment. It has deployed generative AI models to train refugee officers and role-play interviews. This week, it launched an internal DHS AI chatbot powered by OpenAI through Microsoft's Azure cloud computing platform.
During his tenure, Mayorkas drew up a framework for the safe and secure deployment of AI in critical infrastructure, making recommendations for cloud and compute providers, AI developers, and infrastructure owners and operators on addressing risks. It included guarding the physical security of data centres powering AI systems, monitoring activity, evaluating models for risks, bias and vulnerabilities, and protecting consumer data. "We have to work well with the private sector," he added. "They're a key stakeholder of our nation's critical infrastructure. The majority of it is actually owned and operated by the private sector. We need to execute a model of partnership and not one of adversity or tension."
[2]
Homeland Security Head Criticizes EU's 'Adversarial' AI Approach | PYMNTS.com
America's homeland security chief is reportedly worried about the European Union's approach to AI regulation. Alejandro Mayorkas, outgoing head of the U.S. Department of Homeland Security, said in an interview with the Financial Times (FT) Thursday (Dec. 19) that Europe's "adversarial" relationship with tech companies is hindering broader efforts to regulate artificial intelligence (AI). He said the U.S. and Europe are not on a "strong footing" because of a difference in how they approach regulation, and stressed a need for "harmonization" across the Atlantic. Mayorkas also expressed concern that relationships between governments and tech companies in Europe are "more adversarial" than in the U.S. "Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective," he said, adding that companies would also have trouble navigating regulations across jurisdictions.
This year saw the European Union (EU) introduce its AI Act, which, as the FT notes, is considered the toughest artificial intelligence regulation in the world. The law features restrictions on "high risk" AI systems and regulations aimed at creating more transparency about how AI companies use data. The U.K. government also plans to introduce legislation that would compel AI companies to give access to their models for safety assessments.
The British government this week also launched a consultation to clarify copyright laws concerning the use of protected materials in training AI models. The aim is to address issues such as enhancing transparency between creators and AI developers, developing licensing frameworks for the use of copyrighted materials, and ensuring wide access to high-quality data for AI innovation. "These proposals will help unlock the full potential of the AI sector and creative industries to drive innovation, investment, and prosperity across the country," the government said in a news release.
Meanwhile, incoming U.S. President Donald Trump has pledged to revoke predecessor Joe Biden's AI executive order, which established a safety institute to voluntarily test AI models. Trump has also picked David Sacks, a venture capitalist and noted critic of tech regulation, to serve as his AI and crypto czar. Mayorkas told the FT he did not know if the U.S. safety institute "would stay" under the Trump administration, but warned prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving AI industry.
Alejandro Mayorkas, outgoing head of the US Department of Homeland Security, warns that Europe's "adversarial" relationship with tech companies could hinder global AI regulation efforts and create security vulnerabilities.
Alejandro Mayorkas, the outgoing head of the US Department of Homeland Security, has expressed concerns about the European Union's approach to artificial intelligence (AI) regulation. In a recent interview with the Financial Times, Mayorkas criticized Europe's "adversarial" relationship with tech companies, warning that it could impede global efforts to regulate AI and potentially create security vulnerabilities [1].
Mayorkas emphasized the need for "harmonisation across the Atlantic," highlighting that the US and Europe are not on a "strong footing" due to their differing regulatory approaches. He expressed particular concern about the more adversarial relationships between European governments and tech companies compared to those in the US [2].
The EU recently introduced its AI Act, widely considered the strictest AI regulation globally. The act imposes restrictions on "high risk" AI systems and mandates greater transparency in data usage by AI companies [1]. Similarly, the UK government plans to introduce legislation requiring AI companies to provide access to their models for safety assessments.
In contrast, the US has taken a more cautious approach to AI regulation. President Joe Biden's executive order established a safety institute for voluntary AI model testing. However, president-elect Donald Trump has vowed to cancel this order and has appointed David Sacks, a critic of tech regulation, as his AI and crypto tsar [1].
Mayorkas warned that prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving AI sector. He advocated for a "descriptive" rather than "prescriptive" approach to guidelines, arguing that a mandatory structure could be "perilous in a rapidly evolving world" [1].
The Department of Homeland Security has been actively incorporating AI into its operations, aiming to demonstrate how government agencies can implement new technologies while ensuring safe and secure deployment. Recent initiatives include using generative AI models to train refugee officers and launching an internal AI chatbot [1].
The regulatory debate has sparked diverse reactions from the tech industry. Silicon Valley venture capitalist Marc Andreessen expressed fear about government officials' plans for AI policy, describing them as "out for blood." Meanwhile, Republican senator Ted Cruz has warned against "heavy-handed" foreign regulatory influence from Europe and the UK [1].
Mayorkas stressed the importance of working well with the private sector, noting that the majority of the nation's critical infrastructure is owned and operated by private companies. He emphasized the need for a partnership model rather than one of adversity or tension [1].
As the global AI landscape continues to evolve, the challenge of balancing innovation with safety and security remains at the forefront of international policy discussions. The contrasting approaches of the US and EU highlight the complexity of achieving a harmonized global framework for AI regulation.
© 2025 TheOutpost.AI All rights reserved