3 Sources
[1]
States take the lead in AI regulation as federal government steers clear
U.S. state legislatures are where the action is for placing guardrails around artificial intelligence technologies, given the lack of meaningful federal regulation. The resounding defeat in Congress of a proposed moratorium on state-level AI regulation means states are free to continue filling the gap. Several states have already enacted legislation around the use of AI, and all 50 states have introduced AI-related legislation in 2025. Four aspects of AI in particular stand out from a regulatory perspective: government use of AI, AI in health care, facial recognition and generative AI.

Government use of AI

The oversight and responsible use of AI are especially critical in the public sector. Predictive AI - AI that performs statistical analysis to make forecasts - has transformed many governmental functions, from determining social services eligibility to making recommendations on criminal justice sentencing and parole. But the widespread use of algorithmic decision-making could have major hidden costs: potential harms posed by AI systems used for government services include racial and gender biases.

Recognizing the potential for algorithmic harms, state legislatures have introduced bills focused on public sector use of AI, with an emphasis on transparency, consumer protections and the risks of AI deployment. Several states require AI developers to disclose the risks posed by their systems. The Colorado Artificial Intelligence Act includes transparency and disclosure requirements for developers of AI systems involved in making consequential decisions, as well as for those who deploy them. Montana's new "Right to Compute" law requires AI developers to adopt risk management frameworks - methods for addressing security and privacy in the development process - for AI systems involved in critical infrastructure. Some states have established bodies with oversight and regulatory authority, such as those specified in New York's SB 8755 bill.

AI in health care

In the first half of 2025, 34 states introduced over 250 AI-related health bills. The bills generally fall into four categories: disclosure requirements, consumer protection, insurers' use of AI and clinicians' use of AI. Disclosure bills define what information AI system developers and the organizations that deploy their systems must disclose. Consumer protection bills aim to keep AI systems from unfairly discriminating against some people and to ensure that users of the systems have a way to contest decisions made using the technology. Bills covering insurers provide oversight of payers' use of AI to make decisions about health care approvals and payments. And bills about clinical uses of AI regulate the use of the technology in diagnosing and treating patients.

Facial recognition and surveillance

In the U.S., a long-standing legal doctrine that applies to privacy protection issues, including facial surveillance, is the protection of individual autonomy against interference from the government. In this context, facial recognition technologies pose significant privacy challenges as well as risks from potential biases. Facial recognition software, commonly used in predictive policing and national security, has exhibited biases against people of color and consequently is often considered a threat to civil liberties.
A pathbreaking study by computer scientists Joy Buolamwini and Timnit Gebru found that facial recognition software poses significant challenges for Black people and other historically disadvantaged minorities: the software was less likely to correctly identify darker faces. Bias also creeps into the data used to train these algorithms, for example when the teams guiding the development of such facial recognition software lack diversity. By the end of 2024, 15 U.S. states had enacted laws to limit the potential harms from facial recognition. Elements of these state-level regulations include requirements that vendors publish bias test reports and data management practices, as well as requirements for human review in the use of these technologies.

Generative AI and foundation models

The widespread use of generative AI has also prompted concerns from lawmakers in many states. Utah's Artificial Intelligence Policy Act requires individuals and organizations to clearly disclose that they are using generative AI to interact with someone when that person asks whether AI is being used, though the legislature subsequently narrowed the scope to interactions that could involve dispensing advice or collecting sensitive information. Last year, California passed AB 2013, a generative AI law that requires developers to post information on their websites about the data used to train their AI systems, including foundation models. Foundation models are AI models trained on extremely large datasets that can be adapted to a wide range of tasks without additional training. AI developers have typically not been forthcoming about the training data they use, so such legislation could help copyright owners of content used to train AI overcome that lack of transparency.

Trying to fill the gap

In the absence of a comprehensive federal legislative framework, states have tried to address the gap by moving forward with their own legislative efforts. While such a patchwork of laws may complicate AI developers' compliance efforts, I believe that states can provide important and needed oversight on privacy, civil rights and consumer protections.

Meanwhile, the Trump administration announced its AI Action Plan on July 23, 2025. The plan says, "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations ..." The move could hinder state efforts to regulate AI if states have to weigh regulations that might run afoul of the administration's definition of burdensome against needed federal funding for AI.

Anjana Susarla is Professor of Information Systems at Michigan State University. This article is republished from The Conversation under a Creative Commons license. Read the original article.
[2]
How states are placing guardrails around AI in the absence of strong federal regulation
[3]
4 ways states are placing guardrails around AI
U.S. states are taking the initiative in regulating AI technologies across various sectors, including government use, healthcare, facial recognition, and generative AI, as federal regulation remains limited.
In the absence of comprehensive federal regulation, U.S. states are taking the lead in establishing guardrails around artificial intelligence (AI) technologies. The defeat of a proposed moratorium on state-level AI regulation in Congress has paved the way for states to continue their legislative efforts [1][2]. As of 2025, all 50 states have introduced various AI-related bills, with several already enacting legislation [1][2][3].
State legislatures are focusing on four primary aspects of AI regulation:
Government Use of AI: States are introducing bills to oversee the public sector's use of AI, particularly predictive AI applications. These bills emphasize transparency, consumer protections, and risk assessment [1][2]. For instance, the Colorado Artificial Intelligence Act mandates transparency and disclosure requirements for AI systems involved in consequential decisions [1][2].
AI in Healthcare: In the first half of 2025, 34 states introduced over 250 AI-related health bills [1][2]. These bills address four main categories: disclosure requirements, consumer protection, insurers' use of AI, and clinicians' use of AI.
Facial Recognition and Surveillance: By the end of 2024, 15 states had enacted laws to limit potential harms from facial recognition technology [1][2]. These regulations often require vendors to publish bias test reports and data management practices, and mandate human review in the use of these technologies [1][2].
Generative AI and Foundation Models: States are addressing concerns related to generative AI and foundation models. Utah's Artificial Intelligence Policy Act requires disclosure of AI use in certain interactions [1][2]. California's AB 2013 mandates that developers post information about the data used to train their AI systems, including foundation models [1][2].
While state-led regulation provides needed oversight on privacy, civil rights, and consumer protections, it also creates a patchwork of laws that may complicate compliance efforts for AI developers [1][2]. Still, the article's author argues that states can play a crucial role in addressing the regulatory gap left by the federal government [1][2].
The Trump administration announced its AI Action Plan on July 23, 2025, which includes a controversial stance on state-level AI regulation. The plan states, "The Federal government should not allow AI-related Federal funding to be directed toward states with burdensome AI regulations ..." [1]. This approach may create tension between federal and state-level AI governance efforts.
As AI technologies continue to evolve and permeate various sectors, the interplay between state and federal regulation will likely remain a critical issue in shaping the future of AI governance in the United States.