Curated by THEOUTPOST
On Wed, 23 Oct, 12:05 AM UTC
4 Sources
[1]
AI firms and civil society groups plead for federal AI law
Congress urged to act before year's end to support US competitiveness

More than 60 commercial orgs, non-profits, and academic institutions have asked Congress to pass legislation authorizing the creation of the US AI Safety Institute within the National Institute of Standards and Technology (NIST).

Bills introduced previously in the US Senate and the House of Representatives - S 4178, the Future of AI Innovation Act, and HR 9497, the AI Advancement and Reliability Act - call for a NIST-run AI center focused on research, standards development, and public-private partnerships to advance artificial intelligence technology.

The Senate bill, backed by senators Maria Cantwell (D-Wash.), Todd Young (R-Ind.), John Hickenlooper (D-Colo.), Marsha Blackburn (R-Tenn.), Ben Ray Luján (D-N.M.), Roger Wicker (R-Miss.), and Kyrsten Sinema (I-Ariz.), would formally establish the US AI Safety Institute, which already operates within NIST. The House bill, sponsored by Jay Obernolte (R-CA-23), Ted Lieu (D-CA-36), and Frank Lucas (R-OK-3), describes the NIST-based group as the Center for AI Advancement and Reliability.

If approved by both chambers, the two bills would be reconciled into a single piece of legislation for President Biden to sign. Whether that will happen at a time of historic congressional inaction, amid a particularly consequential election cycle, is anyone's guess.

The "Do Nothing" 118th Congress, which commenced on January 3, 2023 and will conclude on January 3, 2025, has been exceptionally unproductive - enacting just 320 pieces of legislation to date, compared with an average of about 782 per Congress. That's the smallest number of laws enacted by any Congress in the past 50 years, which is as far back as the records at GovTrack.us go.

Undaunted, the aforementioned coalition, led by Americans for Responsible Innovation (ARI) and the Information Technology Industry Council (ITI), published an open letter [PDF] on Tuesday urging lawmakers to support NIST's efforts to address AI safety for the sake of national security and competitiveness.

"As other governments quickly move ahead, Members of Congress can ensure that the US does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing US AI innovation and adoption," declared ITI president and CEO Jason Oxman in a statement. "We urge Congress to heed today's call to action from industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year."

Signatories of the letter include AI-focused platform providers like Amazon, Google, and Microsoft; defense contractors like Lockheed Martin and Palantir; model makers like Anthropic and OpenAI; advocacy groups like Public Knowledge; and academic institutions like Carnegie Mellon University.

This call to action, then, is about national policy goals and frameworks for assessing AI systems rather than enforceable limits. Both the Senate and House bills call for the formulation of voluntary best practices. That sets them apart from California's SB 1047, which envisioned enforceable obligations to promote AI safety, met resistance from the tech industry, and was vetoed by Governor Gavin Newsom last month over concerns about the bill's effect on the state's economy.
California state senator Scott Wiener, who introduced SB 1047, responded to Newsom's veto by saying, "While the large AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary commitments from industry are not enforceable and rarely work out well for the public. This veto leaves us with the troubling reality that companies aiming to create an extremely powerful technology face no binding restrictions from US policymakers, particularly given Congress's continuing paralysis around regulating the tech industry in any meaningful way."

In the 70 days remaining in 2024, perhaps lawmakers will find a way to unite and pass federal AI legislation that tech companies themselves have endorsed. But probably not. ®
[2]
Leading tech companies call on Congress to approve AI safety institute
Oct. 22 (UPI) -- A group of leading technology companies and organizations joined forces on Tuesday to call on Congress to permanently authorize the U.S. Artificial Intelligence Safety Institute by the end of the year.

Calling for science-based standards to serve as guardrails as AI technology grows at a breakneck pace, the more than 60 companies and groups said the institute, which would be housed within the National Institute of Standards and Technology (NIST), is needed immediately.

Amazon, Anthropic, Google, Meta, Microsoft and OpenAI signed on to the letter to Congress from the Information Technology Industry Council and Americans for Responsible Innovation. The companies are among the leaders in the emerging and controversial field of AI.

"This work at NIST is already underway, in close partnership with a cross-section of stakeholders," the letter said. "As other nations around the world are establishing their own AI safety institutes, furthering NIST's ongoing efforts is essential to advancing U.S. AI innovation, leadership and national security."

The letter said House and Senate committees have already advanced their own bipartisan bills to address some of these issues -- S. 4178, the Future of AI Innovation Act, in the Senate, and H.R. 9497, the AI Advancement and Reliability Act, in the House. The letter said the United States has an opportunity to lead the world in AI safety or concede that leadership to other countries.

"Now the two chambers must come together on final legislation to codify and provide resources for a NIST U.S. AI Safety Institute focused on research and development and pre-deployment testing and evaluation of the resilience and safety of AI systems," the letter said.
[3]
Tech giants press Congress to codify AI Safety Institute
A coalition of more than 60 tech companies and industry groups, including Amazon, Anthropic, Google, Meta, Microsoft and OpenAI, urged congressional leaders Tuesday to pass legislation permanently authorizing the U.S. Artificial Intelligence Safety Institute by the end of the year.

In a letter led by the Information Technology Industry Council (ITI) and Americans for Responsible Innovation (ARI), the coalition called on Congress to prioritize legislation that would codify and provide resources for the institute, which is located within the National Institute of Standards and Technology (NIST).

"There is a critical opportunity for the U.S. to lead multilateral efforts through its own AI Safety Institute -- or risk letting other countries write the rules for this powerful technology," the letter reads.

"Codifying this work at NIST before the end of this year will help ensure that the U.S. maintains its influence in the development of science-backed standards and technical work that underpin our shared goal of responsible development of advanced AI systems," it added.

The U.S. AI Safety Institute was established last year by the Commerce Department in response to President Biden's executive order on AI. It is meant to lead the government's efforts on AI safety and trust, particularly when it comes to evaluating advanced AI models.

Both the House and Senate have advanced legislation out of committee that would support the effort to codify the U.S. AI Safety Institute. Tuesday's letter urged the two chambers to come together on final legislation by the end of the year.

ARI President Brad Carson described such efforts as "an easy, bipartisan step forward Congress can take right now."

"After all the footwork this Congress has done to educate members on AI issues, this is the single most important AI policy lawmakers can pass before the end of the year," he said in a statement.
[4]
The U.S. AI Safety Institute stands on shaky ground | TechCrunch
One of the only U.S. government offices dedicated to assessing AI safety is in danger of being dismantled if Congress doesn't choose to authorize it.

The U.S. AI Safety Institute (AISI), a federal government body that studies risks in AI systems, was created last November as part of President Joe Biden's AI Executive Order. The AISI operates within NIST, an agency of the Commerce Department that develops guidance for the deployment of various categories of technologies. But while the AISI has a budget, a director, and a research partnership with its counterpart in the U.K., the U.K. AI Safety Institute, it could be wound down with a simple repeal of President Biden's executive order.

"If another president were to come into office and repeal the AI Executive Order, they would dismantle the AISI," Chris MacKenzie, senior director of communications at Americans for Responsible Innovation, an AI lobby group, told TechCrunch. "And [Donald] Trump has promised to repeal the AI Executive Order. So Congress formally authorizing the AI Safety Institute would ensure its continued existence regardless of who's in the White House."

Beyond assuring the AISI's future, authorizing the office could also lead to more stable, long-term funding for its initiatives from Congress. The AISI currently has a budget of around $10 million -- a relatively small amount considering the concentration of major AI labs in Silicon Valley.

"Appropriators in Congress tend to give higher budgeting priority to entities formally authorized by Congress," MacKenzie said, "with the understanding that those entities have broad buy-in and are here for the long run, rather than just a single administration's one-off priority."

In a letter today, a coalition of over 60 companies, nonprofits, and universities called on Congress to enact legislation codifying the AISI before the end of the year. Among the signatories are OpenAI and Anthropic, both of which have signed agreements with the AISI to collaborate on AI research, testing, and evaluation.

The Senate and House have each advanced bipartisan bills to authorize the activities of the AISI. But the proposals have faced some opposition from conservative lawmakers, including Sen. Ted Cruz (R-Texas), who has called for the Senate version of the AISI bill to pull back on diversity programs.

Granted, the AISI is a relatively weak organization from an enforcement perspective: its standards are voluntary. But think tanks and industry coalitions -- as well as tech giants like Microsoft, Google, Amazon, and IBM, all of which signed the aforementioned letter -- see the AISI as the most promising avenue to AI benchmarks that can form the basis of future policy.

There's also concern among some interest groups that allowing the AISI to fold would risk ceding AI leadership to foreign nations. During an AI summit in Seoul in May 2024, international leaders agreed to form a network of AI Safety Institutes comprising agencies from Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union, in addition to the U.K. and U.S.

"As other governments quickly move ahead, members of Congress can ensure that the U.S. does not get left behind in the global AI race by permanently authorizing the AI Safety Institute and providing certainty for its critical role in advancing U.S. AI innovation and adoption," Jason Oxman, president and CEO of the Information Technology Industry Council, an IT industry trade association, said in a statement.
"We urge Congress to heed today's call to action from industry, civil society, and academia to pass necessary bipartisan legislation before the end of the year."
A coalition of over 60 tech companies, nonprofits, and academic institutions is calling on Congress to pass legislation authorizing the U.S. AI Safety Institute within NIST before the end of 2024, citing concerns about national competitiveness and AI safety.
A coalition of over 60 technology companies, non-profit organizations, and academic institutions has issued an urgent appeal to the U.S. Congress, calling for the permanent authorization of the U.S. Artificial Intelligence Safety Institute (AISI) within the National Institute of Standards and Technology (NIST) before the end of 2024 [1][2][3].
The coalition, led by Americans for Responsible Innovation (ARI) and the Information Technology Industry Council (ITI), includes major tech players such as Amazon, Google, Microsoft, Meta, OpenAI, and Anthropic. In an open letter, they emphasized the critical need for science-based standards to serve as guardrails for the rapidly evolving AI technology landscape [1][2].
Two bills have already been introduced in Congress:

- S. 4178, the Future of AI Innovation Act, in the Senate
- H.R. 9497, the AI Advancement and Reliability Act, in the House

Both bills aim to establish a NIST-run AI center focused on research, standards development, and public-private partnerships [1].
The AISI, created in November 2023 as part of President Biden's AI Executive Order, currently operates within NIST. However, its existence is precarious, as it could be dismantled with a simple repeal of the executive order [4]. Congressional authorization would ensure the institute's continuity regardless of changes in administration.
Jason Oxman, ITI's president and CEO, stressed the urgency of the matter, stating, "As other governments quickly move ahead, Members of Congress can ensure that the US does not get left behind in the global AI race by permanently authorizing the AI Safety Institute" [1][3].
The push for AISI authorization comes amid growing international efforts to establish AI safety frameworks. During an AI summit in Seoul in May 2024, leaders agreed to form a network of AI Safety Institutes comprising agencies from various countries, including the UK, Japan, France, Germany, and the European Union [4].
Despite bipartisan support for the bills in both chambers, the legislation faces challenges:

- Opposition from some conservative lawmakers, including Sen. Ted Cruz (R-Texas), who has called for the Senate bill to pull back on diversity programs [4]
- A historically unproductive 118th Congress, which has enacted fewer laws than any in the past 50 years, amid a particularly consequential election cycle [1]

It's also important to note that the proposed AISI would focus on voluntary best practices rather than enforceable regulations. This approach differs from more stringent proposals like California's SB 1047, which was vetoed by Governor Gavin Newsom due to concerns about its potential impact on the state's economy [1].
Currently, the AISI operates with a budget of around $10 million. Congressional authorization could lead to more stable, long-term funding and potentially increase its budget [4]. The coalition argues that this investment is crucial for maintaining U.S. leadership in AI development and safety standards.
As the year-end deadline approaches, the tech industry and its allies are intensifying their efforts to secure the AISI's future, viewing it as a critical step in shaping responsible AI development and maintaining U.S. competitiveness in the global AI landscape.
Reports of potential layoffs at the US AI Safety Institute have sparked alarm in the tech industry, raising questions about the future of AI regulation and safety measures in the United States.
4 Sources
Leading AI companies OpenAI and Anthropic have agreed to collaborate with the US AI Safety Institute to enhance AI safety and testing. This partnership aims to promote responsible AI development and address potential risks associated with advanced AI systems.
5 Sources
California's proposed AI safety bill, SB 1047, has ignited a fierce debate in the tech world. While some industry leaders support the legislation, others, including prominent AI researchers, argue it could stifle innovation and favor large tech companies.
3 Sources
Government officials and AI experts from multiple countries meet in San Francisco to discuss AI safety measures, while Trump's vow to repeal Biden's AI policies casts uncertainty over future regulations.
8 Sources
A proposed California bill aimed at regulating artificial intelligence has created a divide among tech companies in Silicon Valley. The legislation has garnered support from some firms while facing opposition from others, highlighting the complex challenges in AI governance.
4 Sources