2 Sources
[1]
US Homeland Security attacks EU effort to police artificial intelligence
The outgoing head of the US Department of Homeland Security believes Europe's "adversarial" relationship with tech companies is hampering a global approach to regulation, which could result in security vulnerabilities. Alejandro Mayorkas told the Financial Times the US -- home of the world's top artificial intelligence groups, including OpenAI and Google -- and Europe are not on a "strong footing" because of a difference in regulatory approach. He stressed the need for "harmonisation across the Atlantic", expressing concern that relationships between governments and the tech industry are "more adversarial" in Europe than in the US. "Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective," Mayorkas said, adding that companies would also struggle to navigate different regulations across jurisdictions.
The warning comes after the EU brought its AI Act into force this year, considered the strictest law governing the nascent technology anywhere in the world. It introduces restrictions on "high risk" AI systems and rules designed to create more transparency on how AI groups use data. The UK government also plans to introduce legislation that would compel AI companies to give access to their models for safety assessments. In the US, president-elect Donald Trump has vowed to cancel his predecessor Joe Biden's executive order on AI, which set up a safety institute to conduct voluntary tests on models. Mayorkas said he did not know if the US safety institute "would stay" under the new administration, but warned prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving sector.
Mayorkas's comments highlight fractures between European and American approaches to AI oversight as policymakers try to balance innovation with safety concerns. The DHS is tasked with protecting the security and safety of the US against threats such as terrorism and cyber attacks.
That responsibility will fall to Kristi Noem, the South Dakota governor Trump chose to run the department. The president-elect has also named venture capitalist David Sacks, a critic of tech regulation, as his AI and crypto tsar. In the US, efforts to regulate the technology have been thwarted by fears it could stifle innovation. In September, California governor Gavin Newsom vetoed an AI safety bill that would have governed the technology within the state, citing such concerns. The Biden administration's early approach to AI regulation has been accused both of being too heavy-handed and of not going far enough.
Silicon Valley venture capitalist Marc Andreessen said during a podcast interview this week that he was "very scared" about government officials' plans for AI policy after meetings with Biden's team this summer. He described the officials as "out for blood". Republican senator Ted Cruz has also recently warned against "heavy-handed" foreign regulatory influence from policymakers in Europe and the UK over the sector. Mayorkas said: "I worry about a rush to legislate at the expense of innovation and inventiveness because lord knows our regulatory apparatus and our legislative apparatus is not nimble." He defended his department's preference for "descriptive" rather than "prescriptive" guidelines. "The mandatory structure is perilous in a rapidly evolving world."
The DHS has been actively incorporating AI into its operations, aiming to demonstrate that government agencies can implement new technologies while ensuring safe and secure deployment. It has deployed generative AI models to train refugee officers and role-play interviews. This week, it launched an internal DHS AI chatbot powered by OpenAI through Microsoft's Azure cloud computing platform.
In his tenure, Mayorkas drew up a framework for the safe and secure deployment of AI in critical infrastructure, making recommendations for cloud and compute providers, AI developers, and infrastructure owners and operators on addressing risks. It included guarding the physical security of data centres powering AI systems, monitoring activity, evaluating models for risks, bias and vulnerabilities, and protecting consumer data. "We have to work well with the private sector," he added. "They're a key stakeholder of our nation's critical infrastructure. The majority of it is actually owned and operated by the private sector. We need to execute a model of partnership and not one of adversity or tension."
[2]
Homeland Security Head Criticizes EU's 'Adversarial' AI Approach | PYMNTS.com
America's homeland security chief is reportedly worried about the European Union's approach to AI regulation. Alejandro Mayorkas, outgoing head of the U.S. Department of Homeland Security, said in an interview with the Financial Times (FT) Thursday (Dec. 19) that Europe's "adversarial" relationship with tech companies is hindering broader efforts to regulate artificial intelligence (AI). He said the U.S. and Europe are not on a "strong footing" because of a difference in how they approach regulation, and stressed a need for "harmonization" across the Atlantic. Mayorkas also expressed concern that relationships between governments and tech companies in Europe are "more adversarial" than in the U.S. "Disparate governance of a single item creates a potential for disorder, and disorder creates a vulnerability from a safety and security perspective," he said, adding that companies would also have trouble navigating regulations across jurisdictions.
This year saw the European Union (EU) introduce its AI Act, which, as the FT notes, is considered the toughest artificial intelligence regulation in the world. The law features restrictions on "high risk" AI systems and regulations aimed at creating more transparency on how AI companies use data. The U.K. government also plans to introduce legislation that would compel AI companies to give access to their models for safety assessments. The British government this week also launched a consultation to clarify copyright laws concerning the use of protected materials in training AI models. The aim is to address issues such as enhancing transparency between creators and AI developers, developing licensing frameworks for the use of copyrighted materials and ensuring wide access to high-quality data for AI innovation. "These proposals will help unlock the full potential of the AI sector and creative industries to drive innovation, investment, and prosperity across the country," the government said in a news release.
Meanwhile, incoming U.S. President Donald Trump has pledged to revoke predecessor Joe Biden's AI executive order, which established a safety institute to voluntarily test AI models. Trump has also picked David Sacks, a venture capitalist and noted critic of tech regulation, to serve as his AI and crypto czar. Mayorkas told the FT he did not know if the U.S. safety institute "would stay" under the Trump administration, but warned prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving AI industry.
Alejandro Mayorkas, outgoing head of the US Department of Homeland Security, warns that Europe's "adversarial" relationship with tech companies could hinder global AI regulation efforts and create security vulnerabilities.
Alejandro Mayorkas, the outgoing head of the US Department of Homeland Security, has expressed concerns about the European Union's approach to artificial intelligence (AI) regulation. In a recent interview with the Financial Times, Mayorkas criticized Europe's "adversarial" relationship with tech companies, warning that it could impede global efforts to regulate AI and potentially create security vulnerabilities [1].
Mayorkas emphasized the need for "harmonisation across the Atlantic," highlighting that the US and Europe are not on a "strong footing" due to their differing regulatory approaches. He expressed particular concern about the more adversarial relationships between European governments and tech companies compared to those in the US [2].
The EU recently introduced its AI Act, widely considered the strictest AI regulation globally. The act imposes restrictions on "high risk" AI systems and mandates greater transparency in data usage by AI companies [1]. Similarly, the UK government plans to introduce legislation requiring AI companies to provide access to their models for safety assessments.
In contrast, the US has taken a more cautious approach to AI regulation. President Joe Biden's executive order established a safety institute for voluntary AI model testing. However, president-elect Donald Trump has vowed to cancel this order and has appointed David Sacks, a critic of tech regulation, as his AI and crypto tsar [1].
Mayorkas warned that prescriptive laws could "suffocate and harm US leadership" in the rapidly evolving AI sector. He advocated for a "descriptive" rather than "prescriptive" approach to guidelines, arguing that a mandatory structure could be "perilous in a rapidly evolving world" [1].
The Department of Homeland Security has been actively incorporating AI into its operations, aiming to demonstrate how government agencies can implement new technologies while ensuring safe and secure deployment. Recent initiatives include using generative AI models to train refugee officers and launching an internal AI chatbot [1].
The regulatory debate has sparked diverse reactions from the tech industry. Silicon Valley venture capitalist Marc Andreessen expressed fear about government officials' plans for AI policy, describing them as "out for blood." Meanwhile, Republican senator Ted Cruz has warned against "heavy-handed" foreign regulatory influence from Europe and the UK [1].
Mayorkas stressed the importance of working well with the private sector, noting that the majority of the nation's critical infrastructure is owned and operated by private companies. He emphasized the need for a partnership model rather than one of adversity or tension [1].
As the global AI landscape continues to evolve, the challenge of balancing innovation with safety and security remains at the forefront of international policy discussions. The contrasting approaches of the US and EU highlight the complexity of achieving a harmonized global framework for AI regulation.
Summarized by Navi