Curated by THEOUTPOST
On Thu, 6 Feb, 4:03 PM UTC
4 Sources
[1]
US sets AI safety aside in favor of 'AI dominance'
Large-scale shifts at US government agencies that monitor AI development are underway this week. Where does that leave AI regulation?

In October 2023, then-President Joe Biden signed an executive order that included several measures for regulating AI. On his first day in office, President Trump overturned it, replacing it a few days later with his own order on AI in the US. This week, some government agencies that enforce AI regulation were told to halt their work, while the director of the US AI Safety Institute (AISI) stepped down. So what does this mean practically for the future of AI regulation? Here's what you need to know.

In addition to naming several initiatives around protecting civil rights, jobs, and privacy as AI accelerates, Biden's order focused on responsible development and compliance. However, as ZDNET's Tiernan Ray wrote at the time, the order could have been more specific, leaving loopholes in much of the guidance. Though it required companies to report on any safety testing efforts, it didn't make red-teaming itself a requirement or clarify any standards for testing. Ray pointed out that because AI as a discipline is very broad, regulating it needs -- but is also hampered by -- specificity.

A Brookings report noted in November that because federal agencies had absorbed many of the directives in Biden's order, those directives might survive Trump's repeal. But that protection is looking less and less likely.

Biden's order established the US AI Safety Institute (AISI), which is part of the National Institute of Standards and Technology (NIST). The AISI conducted AI model testing and worked with developers to improve safety measures, among other regulatory initiatives. In August 2024, AISI signed agreements with Anthropic and OpenAI to collaborate on safety testing and research; in November, it established a testing and national security task force. On Wednesday, likely due to Trump administration shifts, AISI director Elizabeth Kelly announced her departure from the institute via LinkedIn. The fate of both initiatives, and the institute itself, is now unclear.

The Consumer Financial Protection Bureau (CFPB) also carried out many of the Biden order's objectives. For example, a June 2023 CFPB study on chatbots in consumer finance noted that they "may provide incorrect information, fail to provide meaningful dispute resolution, and raise privacy and security risks." CFPB guidance states that lenders have to provide reasons for denying someone credit, regardless of whether their use of AI makes this difficult or opaque. In June 2024, CFPB approved a new rule to ensure algorithmic home appraisals are fair, accurate, and compliant with nondiscrimination law. This week, the Trump administration halted work at CFPB, signaling that it may be on the chopping block -- which would severely undermine the enforcement of these efforts.

CFPB is in charge of ensuring companies comply with anti-discrimination measures like the Equal Credit Opportunity Act and the Consumer Financial Protection Act, and it has noted that AI adoption can exacerbate discrimination and bias.
In an August 2024 comment, CFPB noted it was "focused on monitoring the market for consumer financial products and services to identify risks to consumers and ensure that companies using emerging technologies, including those marketed as 'artificial intelligence' or 'AI,' do not violate federal consumer financial protection laws." It also stated it was monitoring "the future of consumer finance" and "novel uses of consumer data."

"Firms must comply with consumer financial protection laws when adopting emerging technology," the comment continues. It's unclear what body would enforce this if CFPB radically changes course or ceases to exist under new leadership.

On January 23rd, President Trump signed his own executive order on AI. In terms of policy, the single-line directive says only that the US must "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security." Unlike Biden's order, terms like "safety," "consumer," "data," and "privacy" don't appear at all. There is no mention of whether the Trump administration plans to prioritize safeguarding individual protections or address bias in the face of AI development. Instead, the order focuses on removing what the White House called "unnecessarily burdensome requirements for companies developing and deploying AI," seemingly prioritizing industry advancement.

The order goes on to direct officials to find and remove "inconsistencies" with it in government agencies -- that is to say, remnants of Biden's order that have been or are still being carried out.

In March 2024, the Biden administration released an additional memo stating that government agencies using AI would have to prove those tools weren't harmful to the public. Like other Biden-era executive orders and related directives, it emphasized responsible deployment, centering AI's impact on individual citizens. Trump's executive order notes that much of this memo will be reviewed (and likely dismantled) by March 24th.

This is especially concerning given that last week, OpenAI released ChatGPT Gov, a version of OpenAI's chatbot optimized for security and government systems. It's unclear when government agencies will get access to the chatbot or whether there will be parameters around how it can be used, though OpenAI says government workers already use ChatGPT. If the Biden memo -- which has since been removed from the White House website -- is gutted, it's hard to say whether ChatGPT Gov will be held to any similar standards that account for harm.

Trump's executive order gave his staff 180 days to come up with an AI policy, meaning its deadline to materialize is July 22nd. On Wednesday, the Trump administration put out a call for public comment to inform that action plan. In the meantime, the administration is disrupting AISI and CFPB -- two key bodies that carry out Biden's protections -- without a formal policy in place to catch the fallout. That leaves AI oversight and compliance in a murky state for at least the next six months (millennia in AI development timelines, given the rate at which the technology evolves), all while tech giants become even more entrenched in government partnerships and initiatives like Project Stargate.

Considering that global AI regulation still lags far behind the rate of advancement, perhaps it was better to have something rather than nothing.
"While Biden's AI executive order may have been mostly symbolic, its rollback signals the Trump administration's willingness to overlook the potential dangers of AI," said Peter Slattery, a researcher on MIT's FutureTech team who led its Risk Repository project. "This could prove to be shortsighted: a high-profile failure -- what we might call a 'Chernobyl moment' -- could spark a crisis of public confidence, slowing the progress that the administration hopes to accelerate." "We don't want advanced AI that is unsafe, untrustworthy, or unreliable -- no one is better off in that scenario," he added.
[2]
Why regulating AI is so hard -- and necessary
Once seen as a distant prospect, the powerful capabilities of artificial intelligence (AI) have rapidly become a reality. The advent of modern AI, which relies on advanced machine learning and deep learning techniques, has left governments scrambling to catch up and decide how to avoid a litany of threats to society, such as increasingly persuasive propaganda, cyber attacks on public infrastructure, and the capacity for unprecedented levels of surveillance by governments and companies.

Faced with the need to mitigate AI risk, countries and regions are charting different paths. The European Union is leading the way with a comprehensive framework focused on protecting individual rights and ensuring accountability for AI's makers, while China prioritizes state control at the expense of personal freedoms. The United States is racing to catch up as it works on balancing innovation with the need to address safety and ethical concerns. These varied strategies underscore the challenge of regulating AI -- navigating competing priorities while addressing its far-reaching impact. On both national and international fronts, can we find common ground to manage AI responsibly and ensure it serves humanity's best interests?

One expert at the forefront of that debate is political scientist Allison Stanger of Middlebury College. In addition to her professorship in Vermont, Stanger is an affiliate professor at the Berkman Klein Center for Internet & Society at Harvard University and the author of several books, including the forthcoming Who Elected Big Tech? In an article published in the 2024 Annual Review of Political Science, Stanger and coauthors explore the global landscape of AI governance, highlighting the challenges posed by AI and its potential threat to democratic systems.

Knowable Magazine spoke with Stanger about how AI can be regulated nationally to serve democratic values and how we can establish a global framework that addresses common challenges. The interview has been edited for length and clarity.

How would you characterize AI threats?

There are two ways to look at it. You have what I like to think of as threats to democracy, where artificial intelligence exacerbates existing issues, such as privacy concerns, market volatility and misinformation. And then there are existential threats to humanity, such as misaligned AI [AI that doesn't behave in alignment with intended human goals and values], drone warfare or the proliferation of chemical and biological weapons.

A common argument is that humans have always been anxious about new tech, and AI is just the most recent development. How is AI different?

I think that's a valid response for most of human history: Technology changes, then humanity adapts, and there's a new equilibrium. But what's different about this particular technological innovation is that its creators don't entirely understand it. If we're thinking about other technological breakthroughs, like the automobile, I might not know how to fix my car, but there's somebody who does. The thing about generative AI is that, while its creators understand neural networks and deep learning -- the algorithms that underpin modern AI -- they can't predict what a model is going to do. That means if something goes terribly wrong, they can't immediately know how to fix it. It's this knowledge element that takes us beyond ordinary human capacities to think and understand. In that sense, it's really like Alien Intelligence.
How could AI make drone warfare and the proliferation of chemical and biological weapons worse?

Existential threats to humanity don't necessarily mean killer robots: They can just mean AI systems that run amok, that do things they weren't designed to do, or that you didn't foresee they could or would do. Existential threats will emerge if AI reaches a threshold where it's trusted to make choices without human intervention.

Drones are a good example. You might think that, OK, fighter pilots just stay at home; we let the computers fight the computers, and everybody wins. But there's always collateral damage in this type of warfare, and the more autonomy these systems have, the greater the danger.

And then there's the risk of AI being used to create biological or chemical weapons. The basic issue is how to prevent the technology from being misused by bad actors. The same goes for cyber attacks, where just one ordinary hacker could leverage open-source AI models -- models that are publicly available and can be customized and run from a laptop -- to break into all kinds of systems.

And how does AI exacerbate more imminent threats to democracy, such as misinformation and market volatility?

Even without AI, the existing social media system is fundamentally incompatible with democracy. To discuss the best next political steps, you need a core sense of people believing the same things to be true, and that's been blown up by recommender algorithms spawning hateful viral transmissions. AI just automates all those things and makes it easier to amplify and distort human speech. Automation is also what could bring greater volatility to financial markets, as we now have all these automated AI computer models for financial transactions where things happen rapidly without human intervention.

AI also poses a very real threat to individual autonomy. The best way I can describe it is, if you've ever been billed for something incorrectly, it's almost impossible to get a human on the phone. Instead, you're going through all these bots asking you questions and going in circles, without being directly served. That's how I would characterize the real insidious threat from AI: If people increasingly rely upon it, we're all eventually going to be stuck in this Kafka-esque world that makes us feel super small and insignificant and as though we don't have basic human rights.

How would you define AI governance?

Governance is deciding how we're going to work together, on the municipal, state, federal and global level, to deal with this immense new technological innovation that's going to transform our society and politics.

What legislation or other initiatives has the US implemented to protect against AI threats?

The main initiative has been Joe Biden's executive order on AI, signed in 2023. The order, which instructs the federal government on what to prioritize and how to shape policy, focuses on ensuring AI is safe, secure and ethical by setting standards for testing, protecting privacy and addressing national security risks while also encouraging innovation and international collaboration. Essentially, it outlines guardrails that sustain democracy rather than undermine it. President Donald Trump has already overturned this order.

The Biden administration also created the AI Safety Institute, which focuses on advancing AI safety science and practices, addressing risks to national security, public safety and individual rights.
It's not clear what the fate of that institute is going to be under the Trump administration.

What national laws do you see as the most important to rein in AI?

We need to make it very clear that humans have rights but algorithms don't. The national discussion about free speech on online platforms is currently distorted and confused. The Supreme Court seems to have believed that social media platforms are just carriers of information; they're just transmitting things people post in some chronological way. However, the recent unanimous decision to uphold the TikTok ban suggests their understanding is becoming more accurate.

Everything you see online has been mediated by an algorithm that's specifically geared to optimize for engagement, and it turns out that humans are most engaged when they are enraged. And we need to hold the company that designed that algorithm liable for any harm done. Corporations have free speech rights. But a corporation is a collection of humans. And that's different from a machine, which is an instrument of humans.

Has the US made any progress in this direction?

In the United States, we have actually introduced legislation to repeal Section 230, which, put in simplified terms, is a liability shield that says platforms aren't publishers and therefore aren't responsible for anything that happens on them. No other companies in the United States besides the technology companies have this liability shield. By having that shield in place, the court hasn't had to deal with any of these issues and how they pertain to American constitutional democracy. If the proposed legislation passes, Section 230 will be sunsetted by the end of 2025, which will allow First Amendment jurisprudence to develop for our now virtual public square and make platforms liable like any other corporation.

Beyond Biden's executive order, is there proposed AI legislation in the US?

There's a lot of already-drafted legislation for AI safety. There's the Algorithmic Accountability Act, which requires companies to assess the impact of automated systems to ensure they do not create discriminatory or biased outcomes; there's the DEEPFAKES Accountability Act, which seeks to regulate the use of AI to create misleading or harmful deepfake content; and there's the Future of Artificial Intelligence Innovation Act, which encourages the study of AI's impact on the economy, workforce, and national security. We just have to work to make all this proposed legislation reality. But right now, we're not focusing enough on that.

The US is home to the big technology companies, and what the United States does matters for the world. But AI wasn't a discussion during the election campaign. We're also not having the public discussion required for politicians to do something about the total absence of guardrails. Europe has been a trailblazer in AI governance, and there's a lot we can learn from the EU.

What type of regulation has the European Union put in place?

There's the EU Artificial Intelligence Act, which classifies AI systems into risk levels (unacceptable, high, limited, minimal) and imposes stricter rules on higher-risk applications; the Digital Markets Act, which targets large online platforms to prevent monopolistic practices; and the Digital Services Act, which requires platforms to remove illegal content, combat misinformation and provide greater transparency about algorithms and ads.
Finally, there's the earlier GDPR -- the General Data Protection Regulation -- which gives individuals more control over their personal data and imposes requirements on companies for data collection, processing and protection. A version of the GDPR was actually adopted by the state of California in 2018.

How do you see us achieving global governance of AI? Should we have international treaties like we do for nuclear weapons?

I think we should aspire to treaties, yes, but they're not going to be like the ones for nuclear weapons, because nuclear is a lot easier to regulate. Ordinary people don't have access to the components needed to build a nuclear weapon, whereas with AI, so much is commercially available.

What is the main difference in how China and the US regulate AI?

China has a very clear ethics to its political system: It's a utilitarian one -- the greatest good for the greatest number. Liberal democracies are different. We protect individual rights, and you can't trample on those for the good of the majority.

The Chinese government has tighter control over the companies that are building AI systems there. For example, in 2023 China passed its "Measures for the Management of Generative AI Services," which requires providers to ensure AI-generated content aligns with the government's core socialist values. Providers must prevent content that could undermine national unity or social stability and are responsible for the legality of their training data and generated outputs. As there's a symbiotic relationship between the companies and the state, government surveillance is not a problem: If a company gets your personal data, the Communist Party will get it as well. So China has great AI governance -- great AI safety -- but its citizens are not free. That's not a trade-off I think the free world should be willing to make.

How does this difference between authoritarian and democratic systems affect international AI governance?

What I've proposed is a dual-track approach, where we work together with our allies on keeping freedom and democracy alive while simultaneously working to reduce the risk of war with non-democracies. There are still things we can agree on with countries like China. For example, we could reach an agreement on no first use of cyber weapons on critical infrastructure. Now you might say, Oh, well, people will just do it anyway. But the way these agreements operate is that merely talking about and recognizing it as a problem creates channels of communication that can come in very handy in a crisis situation.

Lastly, how do you think the political divide in America, where Republicans tend to support a hands-off approach to business, will affect regulation of AI?

There are real believers in laissez-faire approaches to the market, and Republicans often see the government as a clumsy administrator of regulations. And there's some truth to that. But that begs the question of who is going to put guardrails in place, if not government. It's not going to be the companies -- that's not their job. It's the government's job to look out for the common good and ensure that companies aren't overstepping certain boundaries and harming people. Europeans understand that instinctively, but Americans sometimes don't -- even though they're often benefiting from government guardrails to ensure public safety. My hope is that we can turn them around without having a large-scale catastrophe teach them through experience.
[3]
AI Regulation in the U.S.: Navigating Post-EO 14110
As the Trump administration revokes Executive Order 14110, the U.S. shifts toward a market-driven AI strategy, departing from the Biden administration's regulatory framework. While proponents see this as a catalyst for innovation and economic growth, critics warn of increased risks, regulatory fragmentation, and strained transatlantic relations. With Europe reinforcing its AI Act and states like California exploring their own regulations, the future of AI governance in the U.S. remains uncertain. Will deregulation accelerate progress, or does it open the door to new challenges in ethics, security, and global cooperation?

Just days after taking office, Donald Trump, the 47th President of the United States, issued a series of executive actions aimed at dismantling key initiatives from the Biden administration. Among them was the revocation of Executive Order (EO) 14110, a landmark policy that established a framework for AI governance and regulation. This decision marks a turning point in U.S. AI policy. For its supporters, it is a necessary reform; for its critics, it is a dangerous setback. While EO 14110 aimed to structure AI adoption by balancing innovation and oversight, its repeal raises critical questions about the future of AI in the United States and its global impact.

Executive Order 14110 was issued on October 30, 2023, under the Biden administration. This major initiative aimed to regulate the development and deployment of artificial intelligence. Its goal was to balance innovation, security, and economic stability while ensuring that AI systems remained reliable, safe, and transparent. In the Biden administration's vision, EO 14110 was designed to address key concerns such as algorithmic bias, misinformation, job displacement, and cybersecurity vulnerabilities. It was not intended to impose direct restrictions on the private sector but rather to establish security and ethical standards, particularly for AI used by federal agencies and in public sector contracts, while also influencing broader AI governance.

From an international perspective, EO 14110 also aimed to strengthen the United States' role in global AI governance. It aligned with the European Union's approach, particularly as the EU was developing its AI Act. The order was part of a broader transatlantic effort to establish ethical and security standards for AI.

"Artificial Intelligence (AI) holds extraordinary potential for both promise and peril. Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security." (EO 14110 - Section 1)

It is important to understand that EO 14110 was not an isolated initiative. It was part of a broader strategy built on several existing frameworks and commitments. It is worth noting that even after the revocation of EO 14110, these initiatives remain in place, ensuring a degree of continuity in AI governance in the United States.

Executive Order 14110 pursued several strategic objectives aimed at regulating AI adoption while promoting innovation. It emphasized the security and reliability of AI systems by requiring robustness testing and risk assessments, particularly in sensitive areas such as cybersecurity and critical infrastructure.
It also aimed to ensure fairness and combat bias by implementing protections against algorithmic discrimination and promoting ethical AI use in hiring, healthcare, and justice. EO 14110 included training, reskilling, and protection programs to help workers adapt to AI-driven changes. It also aimed to protect consumers by preventing fraudulent or harmful AI applications, ensuring safe and beneficial use. Finally, the executive order aimed to reinforce international cooperation, particularly with the European Union, to establish common AI governance standards. However, it's important to note that it did not aim to regulate the entire private sector but rather to set strict ethical and security standards for AI systems used by federal agencies. In short, the order rested on eight guiding principles covering safety and security, responsible innovation, support for workers, equity and civil rights, consumer protection, privacy, responsible federal use of AI, and international leadership.

On January 20, 2025, the Trump administration announced the revocation of EO 14110, arguing that it restricted innovation by imposing excessive administrative constraints. The White House justified this decision as part of a broader push to deregulate the sector, boost the economy, and attract AI investment. The administration made clear its preference for a market-driven approach. According to Trump, private companies are better positioned to oversee AI development without federal intervention. Clearly, this shift marks a geopolitical turning point: the United States is moving away from a multilateral approach to assert its dominance in the AI sector. However, this revocation does not mean the end of AI regulation in the United States. Other federal initiatives, such as the NIST AI Risk Management Framework, remain in place.

"Republicans support AI development rooted in free speech and human flourishing." (2024 Republican Party platform, as reported by Reuters)

The repeal of EO 14110 has immediate effects and long-term implications, reshaping the future of AI development in the United States. From the Trump administration's perspective, this decision removes bureaucratic hurdles, accelerates innovation, and strengthens U.S. competitiveness in AI. Supporters argue that by reducing regulatory constraints, the repeal allows companies to move faster, lowers compliance costs, and attracts greater investment, particularly in automation and biotechnology.

On the other hand, without a federal framework, the risks associated with the development and use of AI technologies are increasing. Algorithmic bias, cybersecurity vulnerabilities, and the potential misuse of AI become harder to control without national oversight. Critics also warn of a weakening of worker and consumer protections, as the end of support programs could further deepen economic inequalities.

In practical terms, regulation is becoming more fragmented. Without a federal framework, each state could, and likely will, develop its own AI laws, making compliance more complex for businesses operating nationwide. Some see this as an opportunity for regulatory experimentation; others see it as an opening for opportunistic players to exploit loopholes, or fear legal uncertainty and increased tensions with international partners.

The revocation of EO 14110 also affects global AI governance, particularly in Europe. Transatlantic relations are likely to become strained, as the growing divergence between U.S. and European approaches will make regulatory cooperation more challenging. European companies may tighten their compliance standards to maintain consumer trust, which could influence their strategic decisions.
In fact, the European Union may face pressure to adjust its AI Act, although its regulatory framework remains largely independent from that of the United States.

The revocation of Executive Order 14110 is more than just a policy shift in the United States. It represents a strategic choice, favoring a deregulated model where innovation takes precedence over regulation. While this decision may help accelerate technological progress, it also leaves critical questions unanswered: Who will ensure the ethics, security, and transparency of AI in the United States?

For Europe, this shift deepens the divide with the U.S. and strengthens its role as a "global regulator" through the AI Act. The European Union may find itself alone at the forefront of efforts to enforce strict AI regulations, risking a scenario where some companies favor the less restrictive U.S. market. More than a debate on regulation, this revocation raises a fundamental question: In the global AI race, should progress be pursued at all costs, or should every advancement be built on solid and ethical foundations? The choices made today will shape not only the future of the industry but also the role of democracies in the face of tech giants.

The revocation of EO 14110 highlights a broader debate: who really shapes AI policy, the government or private interests? While the U.S. moves toward deregulation, California's AI safety bill (SB 1047) is taking the opposite approach, proposing strict oversight for advanced AI models. But as an investigation by Pirate Wires reveals, this push for regulation isn't without controversy. Dan Hendrycks, a key advocate for AI safety, co-founded Gray Swan, a company developing compliance tools that could directly benefit from SB 1047's mandates. This raises a crucial question: When policymakers and industry leaders are deeply intertwined, is AI regulation truly about safety, or about controlling the market? In the race to govern AI, transparency may be just as important as regulation itself.
[4]
Inside France's Effort to Shape the Global AI Conversation
One evening early last year, Anne Bouverot was putting the finishing touches on a report when she received an urgent phone call. It was one of French President Emmanuel Macron's aides offering her the role of his special envoy on artificial intelligence. The unpaid position would entail leading the preparations for the France AI Action Summit -- a gathering where heads of state, technology CEOs, and civil society representatives will seek to chart a course for AI's future. Set to take place on Feb. 10 and 11 at the presidential Élysée Palace in Paris, it will be the first such gathering since the virtual Seoul AI Summit in May 2024 -- and the first in-person meeting since November 2023, when world leaders descended on Bletchley Park for the U.K.'s inaugural AI Safety Summit. After weighing the offer, Bouverot, who was at the time the co-chair of France's AI Commission, accepted.

But France's Summit won't be like the others. While the U.K.'s Summit centered on mitigating catastrophic risks -- such as AI aiding would-be terrorists in creating weapons of mass destruction, or future systems escaping human control -- France has rebranded the event as the "AI Action Summit," shifting the conversation towards a wider gamut of risks -- including the disruption of the labor market and the technology's environmental impact -- while also keeping the opportunities front and center. "We're broadening the conversation, compared to Bletchley Park," Bouverot says. Attendees expected at the Summit include OpenAI boss Sam Altman, Google chief Sundar Pichai, European Commission president Ursula von der Leyen, German Chancellor Olaf Scholz and U.S. Vice President J.D. Vance.

Some welcome the pivot as a much-needed correction to what they see as hype and hysteria around the technology's dangers. Others, including some of the world's foremost AI scientists -- among them researchers who helped develop the field's fundamental technologies -- worry that safety concerns are being sidelined. "The view within the community of people concerned about safety is that it's been downgraded," says Stuart Russell, a professor of electrical engineering and computer sciences at the University of California, Berkeley, and the co-author of the authoritative textbook on AI used at over 1,500 universities. "On the face of it, it looks like the downgrading of safety is an attempt to say, 'we want to charge ahead, we're not going to over-regulate. We're not going to put any obligations on companies if they want to do business in France,'" Russell says.

France's Summit comes at a critical moment in AI development, when the CEOs of top companies believe the technology will match human intelligence within a matter of years. If concerns about catastrophic risks are overblown, then shifting focus to immediate challenges could help prevent real harms while fostering innovation and distributing AI's benefits globally. But if the recent leaps in AI capabilities -- and emerging signs of deceptive behavior -- are early warnings of more serious risks, then downplaying these concerns could leave us unprepared for crucial challenges ahead.

Bouverot is no stranger to the politics of emerging technology. In the early 2010s, she held the director general position at the Global System for Mobile Communications Association, an industry body that promotes interoperable standards among cellular providers globally. "In a nutshell, that role -- which was really telecommunications -- was also diplomacy," she says.
From there, she took the helm at Morpho (now IDEMIA), steering the French facial recognition and biometrics firm until its 2017 acquisition. She later co-founded the Fondation Abeona, a nonprofit that promotes "responsible AI." Her work there led to her appointment as co-chair of France's AI Commission, where she developed a strategy for how the nation could establish itself as a global leader in AI.

Bouverot's growing involvement with AI was, in fact, a return to her roots. Long before her involvement in telecommunications, in the early 1990s, Bouverot earned a PhD in AI at the Ecole normale supérieure -- a top French university that would later produce Arthur Mensch, CEO of French AI frontrunner Mistral AI. After graduating, Bouverot figured AI was not going to have an impact on society anytime soon, so she shifted her focus. "This is how much of a crystal ball I had," she joked on the Washington AI Network's podcast in December, acknowledging the irony of her early skepticism, given AI's impact today.

Under Bouverot's leadership, safety will remain a feature of the summit, but rather than its sole focus, it is now one of five core themes. Others include: AI's use for public good, the future of work, innovation and culture, and global governance. Sessions run in parallel, meaning participants will be unable to attend all discussions. And unlike the U.K. summit, Paris's agenda does not mention the possibility that an AI system could escape human control. "There's no evidence of that risk today," Bouverot says. She says the U.K. AI Safety Summit occurred at the height of the generative AI frenzy, when new tools like ChatGPT captivated public imagination. "There was a bit of a science fiction moment," she says, adding that the global discourse has since shifted.

Back in late 2023, as the U.K.'s summit approached, signs of a shift in the conversation around AI's risks were already emerging. Critics dismissed the event as alarmist, with headlines calling it "a waste of time" and a "doom-obsessed mess." Researchers who had studied AI's downsides for years felt that the emphasis on what they saw as speculative concerns drowned out immediate harms like algorithmic bias and disinformation. Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, who was present at Bletchley Park, says the focus on existential risk "was really problematic."

"Part of the issue is that the existential risk concern has drowned out a lot of the other types of concerns," says Margaret Mitchell, chief AI ethics scientist at Hugging Face, a popular online platform for sharing open-weight AI models and datasets. "I think a lot of the existential harm rhetoric doesn't translate to what policy makers can specifically do now," she adds.

On the U.K. Summit's opening day, then-U.S. Vice President Kamala Harris delivered a speech in London: "When a senior is kicked off his health care plan because of a faulty A.I. algorithm, is that not existential for him?" she asked, in an effort to highlight the near-term risks of AI over the summit's focus on the potential threat to humanity.

Recognizing the need to reframe AI discussions, Bouverot says the France Summit will reflect the change in tone. "We didn't make that change in the global discourse," Bouverot says, adding that the focus is now squarely on the technology's tangible impacts. "We're quite happy that this is actually the conversation that people are having now."
One of the actions expected to emerge from France's Summit is a new, yet-to-be-named foundation that will aim to ensure AI's benefits are widely distributed, such as by developing public datasets for underrepresented languages, or scientific databases. Bouverot points to AlphaFold, Google DeepMind's AI model that predicts protein structures with unprecedented precision -- potentially accelerating research and drug discovery -- as an example of the value of public datasets. AlphaFold was trained on a large public database to which biologists had meticulously submitted findings for decades. "We need to enable more databases like this," Bouverot says. Additionally, the foundation will focus on developing talent and smaller, less computationally intensive models in regions outside the small group of countries that currently dominate AI's development. The foundation will be funded 50% by partner governments, 25% by industry, and 25% by philanthropic donations, Bouverot says.

Her second priority is creating an informal "Coalition for Sustainable AI." AI is fueling a boom in data centers, which require energy, and often water for cooling. The coalition will seek to standardize measures for AI's environmental impact, and incentivize the development of more efficient hardware and software through rankings and possibly research prizes. "Clearly AI is happening and being developed. We want it to be developed in a sustainable way," Bouverot says. Several companies, including Nvidia, IBM, and Hugging Face, have already thrown their weight behind the initiative.

Sasha Luccioni, AI & climate lead at Hugging Face and a leading voice on AI's climate impact, says she is hopeful that the coalition will promote greater transparency. Currently, she says, calculating AI's emissions is made more challenging because companies often do not share how long a model was trained for, while data center providers do not publish specifics on the energy usage of GPUs -- the computer chips used for running AI. "Nobody has all of the numbers," she says, but the coalition may help put the pieces together. (A rough sketch of the kind of estimate those missing numbers would feed is shown below.)

Given AI's recent pace of development, some fear severe risks could materialize rapidly. The core concern is that artificial general intelligence, or AGI -- a system that surpasses humans in most regards -- could potentially outmaneuver any constraints designed to control it, perhaps permanently disempowering humanity. Experts disagree about how quickly -- if ever -- we'll reach that technological threshold. But many leaders of the companies seeking to build human-level systems expect to succeed soon. In January, OpenAI's Altman wrote in a blog post: "We are now confident we know how to build AGI." Speaking on a panel at Davos last month, Dario Amodei, the CEO of rival AI company Anthropic, said that AI could surpass human intelligence in almost all things as soon as next year.

Those same titans of industry have made no secret of what they believe is at stake. Amodei has previously said he places a 10% to 25% likelihood that AI causes a societal-scale catastrophe. In 2015, months before co-founding OpenAI, Altman said "AI will probably most likely lead to the end of the world, but in the meantime, there'll be great companies." More recently, Altman has downplayed AI's risks. Meanwhile, a string of safety staff have departed OpenAI, citing concerns over the company's direction.
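An aside on the emissions math Luccioni describes above: a minimal, back-of-the-envelope sketch of how training emissions are commonly estimated is shown here. This is not a method endorsed by the coalition or by Hugging Face; the function name and every input value are hypothetical, and the figures companies rarely disclose -- training duration and per-GPU energy use -- are exactly the inputs the estimate cannot do without.

```python
# Illustrative only: a common back-of-the-envelope formula for AI training emissions.
# energy (kWh) = GPU count x per-GPU power draw (kW) x training hours x data-center PUE
# emissions (tCO2e) = energy (kWh) x grid carbon intensity (kg CO2e per kWh) / 1000
# All example inputs below are hypothetical, not figures for any real model.

def estimate_training_emissions(
    gpu_count: int,             # number of GPUs used for the training run
    gpu_power_kw: float,        # average power draw per GPU, in kilowatts
    training_hours: float,      # wall-clock duration of the run, in hours
    pue: float,                 # data-center power usage effectiveness (overhead factor)
    grid_kgco2_per_kwh: float,  # carbon intensity of the local electricity grid
) -> float:
    """Return estimated emissions in tonnes of CO2-equivalent."""
    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    return energy_kwh * grid_kgco2_per_kwh / 1000.0


if __name__ == "__main__":
    # Hypothetical run: 10,000 GPUs at 0.7 kW each for 30 days, PUE 1.2, 0.4 kg CO2e/kWh.
    tonnes = estimate_training_emissions(10_000, 0.7, 30 * 24, 1.2, 0.4)
    print(f"Estimated training emissions: {tonnes:,.0f} tCO2e")
```

Each input corresponds to a disclosure gap Luccioni points to: without the training hours or the per-GPU power figures, the calculation cannot be completed from public information alone.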
"Over the past years, safety culture and processes have taken a backseat to shiny products," former OpenAI staffer Jan Leike posted to X in May, after resigning. He previously co-led a now defunct team charged with pioneering techniques to keep smarter-than-human AI systems under humanity's control. Predictions of AGI's imminent arrival could prove premature. The tech industry is notorious for bold claims and hype cycles crafted to court investors, and progress could still plateau, leaving AGI a distant prospect. Nevertheless, over the past year, AI models have demonstrated notable advances. At the beginning of 2024, the most capable model scored only 2% on a benchmark consisting of real world software engineering problems; by October, a new model achieved 49%; by December, an astonishing 71.7%. Similar leaps in performance have occurred in competition math and PhD-level science questions, with current state-of-the-art comfortably surpassing 85%. Though benchmarks are imperfect proxies for intelligence, the wider trend is unmistakable. Some observers note that this rapid progress has largely gone unnoticed by the public, perhaps because few people use AI for complex math or science in daily life. Companies including OpenAI, Anthropic, and Google are shifting their focus from user-prompted systems to creating autonomous virtual agents that can operate computers and solve problems independently. This recent rise in intelligence has been accompanied by new deceptive abilities. In December, an experiment conducted by Apollo Research revealed that when confronted with a fictional scenario where the AI was threatened with shutdown, cutting-edge models, including OpenAI's recent o1 and a version of Anthropic's Claude, tried (unsuccessfully) to copy themselves to a new server. When researchers asked the models why they had tried to replicate themselves, they sometimes lied to avoid being caught -- often doubling down when confronted with their lies. The same month, a test by Anthropic and the nonprofit Redwood Research, showed a version of Anthropic's model, Claude, strategically misleading its creators during training. "Exactly the things that people have been writing about for the last 10 years are happening now," Russell says. "As the systems are increasing their ability to reason, we see that indeed they can figure out how to escape. They can lie about it while they're doing it, and so on." Yoshua Bengio, founder and scientific director of Mila Quebec AI Institute, and often referred to as one of the three "Godfathers of AI" for his pioneering work in deep learning, says that while within the business community there is a sense that the conversation has moved on from autonomy risks, recent developments have caused growing concerns within the scientific community. Although expert opinion varies widely on the likelihood, he says the possibility of AI escaping human control can no longer be dismissed as mere science fiction. Bengio led the International AI Safety Report 2025, an initiative modeled after U.N. climate assessments and backed by 30 countries, the U.N., E.U., and the OECD. Published last month, the report synthesizes scientific consensus on the capabilities and risks of frontier AI systems. "There's very strong, clear, and simple evidence that we are building systems that have their own goals and that there is a lot of commercial value to continue pushing in that direction," Bengio says. 
"A lot of the recent papers show that these systems have emergent self-preservation goals, which is one of the concerns with respect to the unintentional loss of control risk," he adds. At previous summits, limited but meaningful steps were taken to reduce loss-of-control and other risks. At the U.K. Summit, a handful of companies committed to share priority access to models with governments for safety testing prior to public release. Then, at the Seoul AI Summit, 16 companies, across the U.S., China, France, Canada, and South Korea signed voluntary commitments to identify, assess and manage risks stemming from their AI systems. "They did a lot to move the needle in the right direction," Bengio says, but he adds that these measures are not close to sufficient. "In my personal opinion, the magnitude of the potential transformations that are likely to happen once we approach AGI are so radical," Bengio says, "that my impression is most people, most governments, underestimate this whole lot." But rather than pushing for new pledges, in Paris the focus will be streamlining existing ones -- making them compatible with existing regulatory frameworks and each other. "There's already quite a lot of commitments for AI companies," Bouverot says. This light-touch stance mirrors France's broader AI strategy, where homegrown company Mistral AI has emerged as Europe's leading challenger in the field. Both Mistral and the French government lobbied for softer regulations under the E.U.'s comprehensive AI Act. France's Summit will feature a business-focused event, hosted across town at Station F, France's largest start-up hub. "To me, it looks a lot like they're trying to use it to be a French industry fair," says Andrea Miotti, the executive director of Control AI, a non-profit that advocates for guarding against existential risks from AI. "They're taking a summit that was focused on safety and turning it away. In the rhetoric, it's very much like: let's stop talking about the risks and start talking about the great innovation that we can do." The tension between safety and competitiveness is playing out elsewhere, including India, which, it was announced last month, will co-chair France's Summit. In March, India issued an advisory that pushed companies to obtain the government's permission before deploying certain AI models, and take steps to prevent harm. It then swiftly reserved course after receiving sharp criticism from industry. In California -- home to many of the top AI developers -- a landmark bill, which mandated that the largest AI developers implement safeguards to mitigate catastrophic risks, garnered support from a wide coalition, including Russell and Bengio, but faced pushback from the open-source community and a number of tech giants including OpenAI, Meta, and Google. In late August, the bill passed both chambers of California's legislature with strong majorities but in September it was vetoed by governor Gavin Newsom who argued the measures could stifle innovation. In January, President Donald Trump repealed the former President Joe Biden's sweeping Executive Order on artificial intelligence, which had sought to tackle threats posed by the technology. Days later, Trump replaced it with an Executive Order that "revokes certain existing AI policies and directives that act as barriers to American AI innovation" to secure U.S. leadership over the technology. 
Markus Anderljung, director of policy and research at the AI safety think tank the Centre for the Governance of AI, says that safety could be woven into the France Summit's broader goals. For instance, initiatives to distribute AI's benefits globally might be linked to commitments from recipient countries to uphold safety best practices. He says he would like to see the list of signatories of the Frontier AI Safety Commitments signed in Seoul expanded -- particularly in China, where only one company, Zhipu, has signed. But Anderljung says that for the commitments to succeed, accountability mechanisms must also be strengthened. "Commitments without follow-ups might just be empty words," he says. "They just don't matter unless you know what was committed to actually gets done."

A focus on AI's extreme risks does not have to come at the exclusion of other important issues. "I know that the organizers of the French summit care a lot about [AI's] positive impact on the global majority," Bengio says. "That's a very important mission that I embrace completely." But he argues that the potential severity of loss-of-control risks warrants invoking the precautionary principle -- the idea that we should take preventive measures even absent scientific consensus. It's a principle that has been invoked by U.N. declarations aimed at protecting the environment, and in sensitive scientific domains like human cloning.

But for Bouverot, it is a question of balancing competing demands. "We don't want to solve everything -- we can't, nobody can," she says, adding that the focus is on making AI more concrete. "We want to work from the level of scientific consensus, whatever level of consensus is reached."

In mid-December, in France's foreign ministry, Bouverot faced an unusual dilemma. Across the table, a South Korean official explained his country's eagerness to join the summit. But days earlier, South Korea's political leadership had been thrown into turmoil when President Yoon Suk Yeol, who co-chaired the previous summit's leaders' session, declared martial law before being swiftly impeached, leaving the question of who will represent the country -- and whether officials could attend at all -- up in the air.

There is a great deal of uncertainty -- not only over the pace at which AI will advance, but over the degree to which governments will be willing to engage. France's own government collapsed in early December after Prime Minister Michel Barnier was ousted in a no-confidence vote, marking the first such collapse since the 1960s. And, as Trump, long skeptical of international institutions, returns to the Oval Office, it is yet to be seen how Vice President Vance will approach the Paris meeting.

When reflecting on the technology's uncertain future, Bouverot finds wisdom in the words of another French pioneer who grappled with a powerful but nascent technology. "I have this quote from Marie Curie, which I really love," Bouverot says. Curie, the first woman to win a Nobel Prize, revolutionized science with her work on radioactivity. She once wrote: "Nothing in life is to be feared, it is only to be understood." Curie's work ultimately cost her life -- she died at a relatively young 66 from a rare blood disorder, likely caused by prolonged radiation exposure.
The Trump administration revokes Biden's AI executive order, signaling a major shift towards deregulation and market-driven AI development in the US. This move raises concerns about safety, ethics, and international cooperation in AI governance.
The United States has undergone a significant shift in its approach to artificial intelligence (AI) regulation, with the new administration revoking Executive Order 14110, a cornerstone of the previous government's AI policy. This move signals a dramatic change from a regulatory framework focused on responsible development to one prioritizing "AI dominance" [1].
The revocation of EO 14110 marks a departure from initiatives that emphasized AI safety, ethical considerations, and consumer protection. The new administration's single-line directive on AI states that the US must "sustain and enhance America's global AI dominance in order to promote human flourishing, economic competitiveness, and national security" [1].
This policy shift has led to significant changes in government agencies:

- Work at the Consumer Financial Protection Bureau (CFPB), which enforced several of the Biden order's consumer protections, has been halted [1].
- Elizabeth Kelly, director of the US AI Safety Institute (AISI), has stepped down, leaving the institute's safety-testing agreements with OpenAI and Anthropic in doubt [1].

The policy change highlights diverging global approaches to AI regulation:

- The European Union continues to enforce its AI Act, which classifies AI systems by risk level, alongside the Digital Markets Act, the Digital Services Act, and the GDPR [2].
- China requires AI providers to align generated content with state priorities, reflecting tight government control [2].
- The US is now pursuing a market-driven, deregulatory path framed around "AI dominance" [1][3].

Critics argue that this deregulatory approach may lead to:

- Greater risks from algorithmic bias, misinformation, and cybersecurity vulnerabilities [3].
- Weakened worker and consumer protections [3].
- Regulatory fragmentation as individual states draft their own AI laws [3].
- Strained transatlantic relations and more difficult international cooperation [3].
Experts like Stuart Russell from UC Berkeley express concern that safety considerations are being downgraded, potentially leaving the US unprepared for crucial challenges ahead [4].
The US policy shift occurs as other nations continue to shape the global AI conversation. France, for instance, is preparing to host the AI Action Summit, broadening the discussion beyond catastrophic risks to include labor market disruption and environmental impacts [4].
As AI capabilities rapidly advance, with some CEOs believing human-level AI could be achieved within years, the debate over appropriate governance strategies intensifies. The US approach now contrasts sharply with initiatives like the AI Action Summit, which aims to address a wider range of AI-related challenges and opportunities [4].
The US pivot towards "AI dominance" represents a significant departure from previous regulatory efforts. While proponents argue this will accelerate innovation and economic growth, critics warn of potential risks to safety, ethics, and international cooperation. As the global AI landscape continues to evolve, the impact of this policy shift on technological development, society, and international relations remains to be seen.