Curated by THEOUTPOST
On Thu, 24 Oct, 12:08 AM UTC
3 Sources
[1]
Letter: California's AI bill matters
California's artificial intelligence safety bill (SB 1047) represented America's best opportunity to date to establish guardrails and rules of the road for emerging artificial intelligence (AI) technology. The governor's veto ignores the urgent need for proactive measures to mitigate the risks of advanced AI (FT View, October 3).

Your editorial rightly notes that it would "be better if safety rules were hashed out and enacted at a federal level". However, with Congress bogged down in election-year politicking, states cannot afford to wait any longer.

It may sound superficially appealing to limit regulation of AI to "high-risk environments", but advanced AI is a general-purpose tool that cannot be easily confined to any one industry. By their very nature, neural networks have the potential to learn and perform a wide range of tasks. For instance, an AI model designed for document translation could end up controlling critical systems like our power grid, cell towers, weapons systems and stock markets.

SB 1047 would have introduced much-needed accountability measures for large AI companies, such as requiring companies spending over $100mn on AI model training to implement basic safety protocols and maintain the ability to shut down potentially harmful systems.

We don't need to "rework the proposed rules"; we need to swiftly pass them into law, whether at the state or federal level. The urgency of AI safety cannot be overstated, and America must act now to prevent serious harm from advanced AI.
[2]
California's governor blocked landmark AI safety laws. Here's why it's such a key ruling for the future of AI worldwide
In a world where artificial intelligence is rapidly shaping the future, California has found itself at a critical juncture. The US state's governor, Gavin Newsom, recently blocked a key AI safety bill aimed at tightening regulations on generative AI development. The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) was seen by many as a necessary safeguard on the technology's development. Generative AI covers systems that produce new content in text, video, images and music -- often in response to questions, or "prompts", by a user.

But Newsom said the bill risked "curtailing the very innovation that fuels advancement in favor of the public good". While agreeing the public needs to be protected from threats posed by the technology, he argued that SB 1047 was not "the best approach".

What happens in California is so important because it is the home of Silicon Valley. Of the world's top 50 AI companies, 32 are currently headquartered within the state. California's legislature therefore has a unique role in efforts to ensure the safety of AI-based technology. But Newsom's decision also reflects a deeper question: can innovation and safety truly coexist, or do we have to sacrifice one to advance the other?

California's tech industry contributes billions of dollars to the state's economy and generates thousands of jobs. Newsom, along with prominent tech investors such as Marc Andreessen, believes too many regulations could slow down AI's growth. Andreessen praised the veto, saying it supports "economic growth and freedom" over excessive caution. However, rapidly advancing AI technologies could bring serious risks, from spreading disinformation to enabling sophisticated cyberattacks that could harm society.

One of the significant challenges is understanding just how powerful today's AI systems have become. Generative AI models, like OpenAI's GPT-4, are capable of complex reasoning and can produce human-like text. AI can also create incredibly realistic fake images and videos, known as deepfakes, which have the potential to undermine trust in the media and disrupt elections. For example, deepfake videos of public figures could be used to spread disinformation, leading to confusion and mistrust. AI-generated misinformation could also be used to manipulate financial markets or incite social unrest. The unsettling part is that no one knows exactly what's coming next. These technologies open doors for innovation -- but without proper regulation, AI tools could be misused in ways that are difficult to predict or control.

Traditional methods of testing and regulating software fall short when it comes to generative AI tools that can create artificial images or video. These systems evolve in ways that even their creators can't fully anticipate, especially after being trained on vast amounts of data from interactions with millions of people, as ChatGPT has been.

SB 1047 sought to address this concern by requiring companies to implement "kill switches" in their AI software that can deactivate the technology in the event of a problem. The law would also have required them to create detailed safety plans for any AI project with a budget over US$100 million (£77.2m). Critics said the bill was too broad, meaning it could affect even lower-risk projects. But its main goal was to set up basic protections in an industry that's arguably moving faster than lawmakers can keep up with.

California as a global leader

What California decides could affect the world.
As a global tech leader, the state's approach to regulating AI could set a standard for other countries, as it has done in the past. For example, California's leadership in setting stringent vehicle emissions standards, its landmark data privacy law (the California Consumer Privacy Act, or CCPA) and its early regulation of self-driving cars have influenced other states and countries to adopt similar measures. But by vetoing SB 1047, California may have sent a message that it's not ready to lead the way in AI regulation. This could leave room for other countries to step in -- countries that may not care as much as the US about ethics and public safety.

Tesla's CEO, Elon Musk, had cautiously supported the bill, acknowledging that while it was a "tough call", it was probably a good idea. His stance shows that even tech insiders recognize the risks AI poses. This might be a sign the industry is ready to work with policymakers on how best to regulate this new breed of technology.

The notion that regulation automatically stifles innovation is misleading. Effective laws can create a framework that not only protects people but also allows AI to grow sustainably. For example, regulations can help ensure that AI systems are developed responsibly, with considerations for privacy, fairness and transparency. This can build public trust, which is essential for the widespread adoption of AI technologies.

The future of AI doesn't have to be a choice between innovation and safety. By implementing reasonable safeguards, we can unlock the full potential of AI while keeping society safe. Public engagement is crucial in this process. People need to be informed about AI's capabilities and risks to participate in shaping policies that reflect society's values.

The stakes are high and AI is advancing rapidly. It's time for proactive action to ensure we reap the benefits of AI without compromising our safety. But California's killing of the AI bill also raises a wider question about the increasing power and influence of tech companies, given that their objections helped bring about its veto.
[3]
California AI bill blocked by governor, and other digital technology stories you need to know
Top digital technology stories: AI bill blocked by California governor; US-UK to set up joint children's online safety working group; India central bank governor warns on financial stability risks of AI.

California's governor, Gavin Newsom, has blocked a controversial artificial intelligence safety bill, which had generated strong opposition from major technology companies. The bill would have imposed some of the first legislation on AI in the United States, reports the BBC. In a statement, the governor explained: "I do not believe this is the best approach to protecting the public from real threats posed by the technology." He said that any AI regulation needed to "keep pace with the technology itself", and that he was committed to finding a path forward that mitigates risks and protects against threats without "thwarting the promise of this technology to advance the public good". The bill would have required the most advanced AI models to undergo safety testing and forced developers to include a 'kill switch' to isolate and cut off an AI system.

The United States and United Kingdom governments have announced they will set up and launch a new working group on children's online safety. In a statement, the UK government said there was currently "limited research and evidence on the causal impact that social media has on children and young people". Once the group is established, it will work in areas including increased transparency from platforms. The countries will also share expertise on safety technologies and the impact of generative AI. "The online world brings incredible benefits for young people, enriching their education and social lives. But these experiences must take place in an environment which has safety baked in from the outset, not as an afterthought," UK Technology Secretary Peter Kyle said.

A study has found that newer, larger versions of three major artificial intelligence chatbots are more likely to generate wrong answers than to admit ignorance. It also found that, in general, people aren't good at spotting the bad answers.

Microsoft's head of AI has said that AI assistants with "really good long-term memory" are around a year away. "I think we're moving to a fundamentally new age where there will be ever-present, persistent, very capable co-pilot companions in your everyday life," he said, according to BBC reporting.

Fourteen attorneys general in the United States have sued TikTok, arguing that the social media platform is helping drive a teen mental health crisis. TikTok called the lawsuits "disappointing".

Brazil has lifted a ban on the social media platform X - formerly Twitter - after the site paid fines and agreed to appoint a local representative.

The US Federal Trade Commission has announced actions against five businesses it said were using AI in deceptive and unfair ways.

Google has announced plans to buy power from multiple small modular reactors to help meet the electricity demands of artificial intelligence.

The Governor of the Reserve Bank of India, Shaktikanta Das, has warned of the financial risks of AI. The growing use of technology, including AI and machine learning, in financial services could impact financial stability, he told an event in New Delhi.

What will determine AI's contribution to productivity growth? And how can policy help shape it? Experts identify three key areas in this piece published for our Annual Meeting of the Global Future Councils.
Thomas Crampton, the Co-Founder of Crampton Blackie Partners, explores four challenges that tech innovators face when engaging with investors - and looks at how to overcome them. How can equitable data practices help shape the future of urban planning? Current urban AI systems do not prioritize inclusion, equity and participation in the innovation process. Here's how equitable and sustainable cities could be achieved by advancing data equity in practice.
California Governor Gavin Newsom vetoes the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), igniting discussions on balancing AI innovation with safety measures.
California Governor Gavin Newsom has vetoed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), a landmark AI safety bill that aimed to establish guardrails for emerging AI technologies 1. The bill, which would have introduced accountability measures for large AI companies, was seen as America's best opportunity to date for setting rules of the road for AI 1.
The vetoed bill included several important provisions:
Safety Testing: The most advanced AI models would have been required to undergo safety testing before deployment 3.
Kill Switch: Developers would have been forced to include a 'kill switch' capable of isolating and deactivating an AI system in the event of a problem 2 (see the sketch below).
Safety Plans: Companies spending over $100 million on AI model training would have had to implement basic safety protocols and create detailed safety plans 1.
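Neither SB 1047 nor the coverage above specifies how such a shutdown capability would actually be built. Purely as an illustration, the short Python sketch below shows one common pattern: a serving wrapper that checks an operator-controlled flag before handling each request. The ModelServer class, its trigger_kill_switch method and the stand-in model are hypothetical names invented for this example, not drawn from the bill or from any real serving framework.

import threading

class ModelServer:
    # Hypothetical example class -- not from SB 1047 or any real framework.
    # The served model checks a shared shutdown flag before each request, so
    # an operator can deactivate it without killing the whole host process.
    def __init__(self, model):
        self.model = model
        self._shutdown = threading.Event()  # the "kill switch" flag

    def trigger_kill_switch(self):
        # Flip the flag; in practice this might be wired to monitoring alerts
        # or a separately authenticated operator channel.
        self._shutdown.set()

    def generate(self, prompt):
        if self._shutdown.is_set():
            raise RuntimeError("model deactivated by kill switch")
        return self.model(prompt)

# Usage with a stand-in "model" (a plain function) for illustration:
server = ModelServer(model=lambda p: "response to: " + p)
print(server.generate("hello"))   # serves normally
server.trigger_kill_switch()
# server.generate("hello")        # would now raise RuntimeError

The point of the pattern, under these assumptions, is simply that deactivation is a first-class, externally reachable control rather than an afterthought bolted onto the model itself.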
Governor Newsom argued that the bill risked "curtailing the very innovation that fuels advancement in favor of the public good" 2. This stance aligns with that of prominent tech investors like Marc Andreessen, who praised the veto as supporting "economic growth and freedom" over excessive caution 2.
The veto has sparked a debate on balancing innovation with safety:
Potential Risks: Critics argue that rapidly advancing AI technologies could lead to serious risks, including the spread of disinformation and sophisticated cyberattacks 2.
Global Impact: As home to 32 of the world's top 50 AI companies, California's decision could influence AI regulation worldwide 2.
Industry Readiness: Tesla CEO Elon Musk's cautious support for the bill suggests that some tech insiders recognize the need for regulation 2.
While the veto has been criticized, it has also opened up discussions on alternative approaches to AI regulation:
Federal Action: Some argue that safety rules should be enacted at the federal level for consistency 1.
Balanced Approach: Experts suggest that effective laws can create a framework that protects people while allowing AI to grow sustainably 2.
Public Engagement: There's a growing call for increased public awareness and participation in shaping AI policies 2.
As AI continues to advance rapidly, the debate over regulation versus innovation remains at the forefront of tech policy discussions. The veto of SB 1047 highlights the complex challenges in governing emerging technologies and the need for a nuanced approach that balances safety concerns with technological progress.