On Tue, 23 Jul, 4:03 PM UTC
2 Sources
[1]
How's AI self-regulation going?
On July 21, President Joe Biden announced he was stepping down from the race against Donald Trump in the US presidential election. But AI nerds may remember that exactly a year earlier, on July 21, 2023, Biden was posing with seven top tech executives at the White House. He'd just negotiated a deal in which they agreed to eight of the most prescriptive rules targeting the AI sector at that time. A lot can change in a year!

The voluntary commitments were hailed as much-needed guidance for an AI sector that was building powerful technology with few guardrails. Since then, eight more companies have signed the commitments, and the White House has issued an executive order that expands upon them -- for example, with a requirement that developers share safety test results for new AI models with the US government if the tests show the technology could pose a risk to national security.

US politics is extremely polarized, and the country is unlikely to pass AI regulation anytime soon. So these commitments, along with some existing laws such as antitrust and consumer protection rules, are the best the US has in terms of protecting people from AI harms.

To mark the one-year anniversary of the voluntary commitments, I decided to look at what's happened since. I asked the original seven companies that signed the commitments to share as much as they could about what they have done to comply, cross-checked their responses with a handful of external experts, and tried my best to provide a sense of how much progress has been made. You can read my story here.

Silicon Valley hates being regulated and argues that regulation hinders innovation. Right now, the US is relying on the tech sector's goodwill to protect consumers from harm, but these companies can change their policies anytime it suits them and face no real consequences. And that's the problem with nonbinding commitments: they are easy to sign, and just as easy to forget.

That's not to say they have no value. They can be useful in creating norms around AI development and in placing public pressure on companies to do better. In just one year, tech companies have implemented some positive changes, such as AI red-teaming, watermarking, and investment in research on how to make AI systems safe. However, these sorts of commitments are opt-in only, and that means companies can always just opt back out again.

Which brings me to the next big question for this field: Where will Biden's successor take US AI policy?

The debate around AI regulation is unlikely to go away if Donald Trump wins the presidential election in November, says Brandie Nonnecke, the director of the CITRIS Policy Lab at UC Berkeley. "Sometimes the parties have different concerns about the use of AI. One might be more concerned about workforce effects, and another might be more concerned about bias and discrimination," says Nonnecke. "It's clear that it is a bipartisan issue that there need to be some guardrails and oversight of AI development in the United States," she adds.

Trump is no stranger to AI. While in office, he signed an executive order calling for more investment in AI research and asking the federal government to use more AI, coordinated by a new National AI Initiative Office. He also issued early guidance on responsible AI. If he returns to office, he is reportedly planning to scrap Biden's executive order and put in place his own AI executive order that reduces regulation and sets up a "Manhattan Project" to boost military AI.

Meanwhile, Biden keeps calling for Congress to pass binding AI regulations. It's no surprise, then, that some of Silicon Valley's billionaires have backed Trump.
[2]
Biden bows out: Kamala Harris takes lead as AI regulations and semiconductor debates heat up
On July 21, US President Joe Biden officially announced his decision to withdraw from the 2024 presidential race amid growing pressure from within the Democratic Party. Biden stated that stepping down was in the best interest of the party and the nation, and he endorsed Vice President Kamala Harris as his successor. The move introduces new uncertainty into the 2024 election, in which AI regulation and semiconductors have become heavily debated battleground topics.

AI regulations did not fall out of the coconut tree

Notably, if Harris emerges as the official Democratic nominee, her stance on AI regulation will be a focal point. Unlike Biden, Harris has been more outspoken about the need for regulations to address potential AI risks, including deepfakes, algorithmic bias, and misinformation. Her approach to AI regulation and its impact on semiconductor supply chains will be closely watched.

According to Bloomberg, Biden's decision to withdraw is aimed at uniting the Democratic Party to defeat former President Donald Trump. In a statement posted on July 21 to social media platform X (formerly Twitter), Harris wrote, "my intention is to earn and win this nomination." The Democratic Party has yet to finalize a replacement for Biden, but Harris is the leading contender, with support from former President Bill Clinton and 2016 presidential nominee Hillary Clinton.

Born in Oakland, California, Harris has long been associated with the tech industry. She served as San Francisco District Attorney and California Attorney General before being elected to the US Senate in 2016.

As Vice President, Harris has been clear about the need for stronger AI regulations. At the AI Safety Summit in the UK in November 2023, she remarked, "As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the well-being of their customers, the safety of our communities, and the stability of our democracies." Politico reported that during early discussions on AI policy in July 2023, Harris emphasized that there should be no "false choice" between fostering innovation and protecting the public.

In contrast, Biden has primarily advocated for self-regulation by AI companies. In October 2023, he issued a broad executive order on AI, which has faced criticism from Republicans who argue that it imposes excessive regulatory burdens on tech firms.

Trump: foe or friend to tech?

Venture capitalists Marc Andreessen and Ben Horowitz have expressed concern that the Biden administration's approach could stifle innovation, citing it as one of the reasons they are supporting Trump this election cycle. The Washington Post reported that Trump, during a podcast, said he had heard from Silicon Valley "geniuses" about the need for more energy to fuel AI development in competition with China. Trump's allies are also preparing an executive order that would initiate a series of "Manhattan Projects" to advance military technology and review existing regulations deemed unnecessary. The GOP platform, launched in the lead-up to the Republican National Convention, includes plans to repeal Biden's AI executive order, which tech investors and startups have criticized as a hindrance to innovation.

Prominent figures in Silicon Valley, such as Tesla CEO Elon Musk and hedge fund manager Bill Ackman, have thrown their support behind Trump, suggesting that a second Trump administration might take a more favorable stance toward the tech industry. In response to reports of these proposed policies, however, the Trump campaign has stated that no future presidential staffing or policy announcements should be considered official unless issued directly by Trump or an authorized campaign member.

Trump's recent comments criticizing Taiwanese semiconductor companies for allegedly taking business from the US, and his suggestion that Taiwan should pay the US for its protection, have cast a shadow over future US-Taiwan semiconductor collaboration. Taiwan-made chips, led by TSMC, are crucial to the AI hardware ecosystem. The potential impact of these comments, and of the November US presidential race, on global AI development and supply chains remains to be seen.
An examination of the current state of AI self-regulation in the tech industry, highlighting the efforts made by major companies and the ongoing challenges faced in establishing effective oversight.
In recent months, the tech industry has been grappling with the challenge of regulating artificial intelligence (AI) as its rapid advancement continues to outpace traditional regulatory frameworks. Major tech companies have taken steps toward self-regulation, recognizing the need for responsible AI development and deployment [1].
Leading AI companies, including OpenAI, Anthropic, and Google, have made public commitments to participate in a US government program aimed at ensuring the safe and ethical development of AI systems. This initiative involves safety testing of AI models for potential risks and sharing the results with the government [1].
While self-regulation efforts are underway, there is growing recognition that government involvement is necessary to establish comprehensive AI governance. The Biden administration has issued an executive order addressing AI regulation, emphasizing the importance of balancing innovation with safety and ethical considerations [2].
Despite these initiatives, the effectiveness of self-regulation remains questionable. Critics argue that without clear standards and enforcement mechanisms, companies may prioritize their own interests over public safety. The subjective nature of risk assessment in AI systems also poses challenges in creating universally applicable guidelines [1].
The European Union has taken a more proactive stance on AI regulation with its AI Act, which aims to establish clear rules for AI development and use. This approach contrasts with the more industry-led efforts in the United States, highlighting the global diversity in regulatory approaches [2].
As AI technology continues to evolve, the debate over regulation intensifies. The tech industry's self-regulation efforts, while a step in the right direction, may not be sufficient to address all concerns. The coming months will be crucial in determining whether a balance can be struck between innovation and responsible AI development, with potential legislative action on the horizon [1][2].