5 Sources
[1]
How LLMs could supercharge mass surveillance in the US
The technology could make commercially available bulk datasets even more of a privacy concern.

There are pieces of your life scattered all over the internet, and some of them are for sale. Data brokers amass web searches, financial records, and location data from millions of individuals and sell them to various clients, including the US government. Information on your recent online purchases or the route that you take to work could be sitting on hard drives around the world, waiting to be used.

While reassembling those pieces isn't trivial, there is early evidence that LLMs might make it far easier. LLM agents could potentially do the work of intelligence analysts in a fraction of the time and for a fraction of the cost, which would enable the state to aim its all-seeing eye toward anyone, not just its highest-priority targets.

"A lot of what we think of as privacy protection isn't so much like something that's written in the law," says Karen Levy, a professor of information science at Cornell University. "It just has to do with how hard or how expensive it is to learn stuff about people." When mobile phones became widespread, gathering data about people got much cheaper, but making use of that data remained difficult. Powerful LLMs could change that.

Worries over how LLMs could facilitate mass surveillance recently made headlines around the world. According to reporting from the New York Times and the Atlantic, contract negotiations between Anthropic and the US Department of Defense fell apart in late February because Anthropic balked when the DOD demanded leeway to use the company's models to analyze commercially available data on US citizens. When Anthropic's rival OpenAI agreed to a DOD deal mere hours later, it faced an immediate wave of public backlash for apparently swanning past Anthropic's red lines. Under pressure, OpenAI and the DOD later revised the contract terms.

For avid followers of Anthropic CEO Dario Amodei, the company's firm stance probably didn't come as a surprise. In a lengthy essay published to his personal website in January, Amodei had argued that AI-enabled mass surveillance could constitute a crime against humanity. The core concern underlying his dispute with the DOD was that the government might use LLM-based systems such as Claude to analyze reams of data obtained from brokers and build detailed profiles of individual Americans at scale.

There's plenty of precedent for AI being used for mass surveillance: most notably, governments worldwide use facial recognition to track citizens and noncitizens alike, and recent reporting indicates that US Immigration and Customs Enforcement (ICE) agents have leaned heavily on facial recognition apps in order to carry out the Trump administration's mass deportation campaign. While there's not yet any smoking-gun evidence that the US government (or anyone else) is using LLMs to conduct surveillance in the way that Amodei warns about, there's a clear appetite for such capabilities.

Artificial analysts

The sort of surveillance against which Amodei cautions is only possible in the United States because of a legal loophole. If the police suspect you of a crime and want to peruse the location data stored on your phone to see if you were present at the scene, they need a signed warrant from a judge. That's because the Fourth Amendment of the Constitution protects anyone in the United States from "unreasonable searches" by the government.
But when the government buys bulk data from brokers, it isn't itself searching -- it's taking advantage of searches conducted by the people who collected and compiled the data. That creates a paradox: The government can't look at the location information on your phone without a warrant, but if a dataset that the government has purchased contains your phone's location data, and the government is able to link it to you, then it can effectively perform an end run around the Fourth Amendment.

The good news is that finding your information in these databases probably isn't as easy as just searching for your name. The data that brokers sell is often stripped of obvious identifiers -- it might contain, for example, location traces from millions of cell phones but not the corresponding phone numbers.

But that's not an insurmountable obstacle. A 2019 New York Times investigation of bulk cell phone location data found that it was often possible to identify the owners of individual phones by making note of their apparent work and home locations. Even though deanonymizing data does take some effort, it's safely within the skill set of intelligence analysts, or indeed any competent internet user, and law enforcement has used ostensibly anonymized location data to tie people to specific crimes.

But those are focused searches. While the government, or other organizations, might be able to access data that describes the locations of tens or hundreds of millions of Americans, it would take an utterly impractical number of human analysts to tie all of that data to specific individuals. AI agents could potentially do it faster and more cheaply. "One way to look at the kerfuffle with Anthropic is that the DOD wants to be able to exploit this [commercial data] loophole to the max," says Greg Nojeim, director of the security and surveillance project at the Center for Democracy & Technology.

There's evidence indicating that LLM agents are up to the job. At the start of this year, Northeastern University professor Tianshi Li shared a particularly ironic example. Using an LLM agent, Li analyzed a publicly available Anthropic dataset that consisted of interviews with several scientists about how they use AI. Anthropic had redacted some personally identifying information in the scientists' responses, but the agent that Li used was able to connect descriptions of some of the subjects' research with specific studies that they had authored. Though the agent only managed to identify a fraction of the interviewed scientists, when it did succeed it was fast and cheap: Each attempt took about four minutes and cost less than fifty cents. Anthropic did not respond to a request for comment for this story.

Other studies have shown that LLMs can match pseudonymous forum accounts to LinkedIn profiles; identify writers' native languages; isolate potentially identifying information from a user's online post history; and infer social media users' psychological traits, locations, incomes, sexes, and ages, among other attributes. While some of these tasks could be completed by the average person if given adequate time, others, such as native-language identification, would be challenging for anyone but an expert. "In practice, they're doing what a competent investigator would do," wrote Nico Dekens, senior vice president of engineering at the intelligence software company ShadowDragon, in an email to MIT Technology Review.
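The home-and-work technique the Times described is, at bottom, a counting exercise, which is part of why it automates so readily. Below is a minimal, hypothetical Python sketch of that heuristic; the input format, coordinate rounding, and hour thresholds are all assumptions for illustration, not the investigation's actual methodology.

```python
from collections import Counter
from datetime import datetime

def infer_home_and_work(pings):
    """pings: list of (iso_timestamp, lat, lon) tuples for ONE device,
    with coordinates already rounded to a coarse grid cell (an assumption)."""
    night, day = Counter(), Counter()
    for ts, lat, lon in pings:
        t = datetime.fromisoformat(ts)
        cell = (lat, lon)
        if t.hour >= 22 or t.hour < 6:
            night[cell] += 1          # overnight dwell points suggest home
        elif t.weekday() < 5 and 9 <= t.hour < 17:
            day[cell] += 1            # weekday business hours suggest work
    home = night.most_common(1)[0][0] if night else None
    work = day.most_common(1)[0][0] if day else None
    return home, work

# A device that sleeps at one address and spends weekdays at another is
# effectively labeled, even though the dataset holds no name or number.
pings = [
    ("2019-03-04T23:30:00", 40.714, -74.006),  # night pings ("home")
    ("2019-03-05T02:10:00", 40.714, -74.006),
    ("2019-03-05T10:05:00", 40.758, -73.985),  # weekday pings ("work")
    ("2019-03-05T14:40:00", 40.758, -73.985),
]
print(infer_home_and_work(pings))
```

Once a device is pinned to a home-work pair, matching that pair against property records or professional profiles will often yield a name. The logic was never the hard part; the scale was, and that is precisely what automation removes.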
All of these results suggest that agents could give an unskilled worker the capabilities of a team of highly trained intelligence analysts. "[An agent] can gather information on its own and it can make plans, so it's not like a static search query," Li says. "It both lowers the barrier to entry and maybe pushes the limits of surveillance even farther."

If political leaders wanted to quash dissent or punish opponents at scale, the combination of bulk data and countless virtual analysts could enable them to do so. A team of LLM agents might be able to identify the real people behind social media accounts that express negative views about the government or leverage a location dataset to compile a list of people present at a protest. Those in power could then make their lives difficult. "If government agents want to harass people, there are many opportunities to do so," says Darrell West, a senior fellow at the Brookings Institution.

In the United States, such harassment might take subtle forms -- being pulled aside for excessive screenings at the airport, for example. Elsewhere, the consequences could be more drastic. Members of China's Uighur ethnic minority, for example, have been extensively surveilled by their government for years, and police may choose to investigate individuals based on surveillance data. For Uighurs, such investigations can result in internment and forced labor. And China appears to be interested in integrating LLMs into its surveillance system: An unsecured dataset discovered last year on a Baidu server indicates that Chinese companies are using LLMs to flag online posts for the purpose of "public opinion monitoring," which is a priority for the Chinese government.

All of this is made worse by the fact that LLMs make mistakes. The advantage of using LLMs for mass surveillance is that they can do far more work than human analysts far more quickly, but that also makes thoroughly checking their work impossible. And because mass surveillance is, by its very nature, secretive, some who fall victim to such errors may not have any recourse.

Privacy on a precipice

For now, these threats are theoretical. It's almost impossible to determine how the US intelligence community is using LLMs in any detail: While most government agencies are required to report how they use AI, intelligence agencies are exempt. And the companies that provide tools that the government might use to conduct surveillance are cagey about the details of their tech, at least in public materials. "There are legitimate reasons for secrecy about how informational assets are being obtained and used for intelligence or defense purposes," Nojeim says. "But the amount of secrecy that surrounds this use is particularly troubling because the tech is so powerful and so new and so difficult for Congress to oversee."

That said, there are some suggestive signs about how the government might be using LLMs. For example, government agencies, including ICE and the Drug Enforcement Administration, hold subscriptions to ShadowDragon's software, and, according to Dekens, the company is currently working on incorporating LLMs into the tools it offers. "LLM agents are already very good at the mechanical side of analysis," he wrote. "Right now, the most effective way to use them is as a copilot and workflow layer." And the government could certainly take advantage of those capabilities, as it has access not only to commercially available bulk data but also to proprietary datasets compiled by individual agencies.
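Li's distinction between an agent and a static search query is worth making concrete. The skeleton below is a deliberately bare sketch of an agentic loop, not any vendor's product; call_llm and search_web are hypothetical stubs standing in for a model endpoint and a lookup tool.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion endpoint."""
    raise NotImplementedError("wire up a real model to experiment")

def search_web(query: str) -> str:
    """Hypothetical stand-in for a search or dataset-lookup tool."""
    raise NotImplementedError

def identify(profile_text: str, max_steps: int = 5) -> str:
    """Try to name the person behind an anonymized profile description."""
    notes = ""
    for _ in range(max_steps):
        # The model plans its own next query from the evidence gathered so far...
        step = call_llm(
            f"Evidence so far:\n{notes}\n"
            f"Target description:\n{profile_text}\n"
            "Propose ONE search query, or answer DONE: <name>."
        )
        if step.startswith("DONE:"):
            return step.removeprefix("DONE:").strip()
        # ...executes it, and folds the result back into its working notes.
        notes += f"\nQ: {step}\nA: {search_web(step)}"
    return "unresolved"
```

Each pass through the loop lets the model refine its next lookup based on what the previous one returned; that feedback is what a static query lacks, and it is why per-target costs can fall to minutes and cents.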
Historically, those datasets had been siloed in different agencies -- the Internal Revenue Service had your tax data and the Centers for Medicare & Medicaid Services had your health records, and they didn't share. Last year, however, the Elon Musk-led Department of Government Efficiency reportedly mounted a crusade to centralize all that data. With the data in one place and powerful AI tools at their fingertips, members of government agencies can, in principle, construct detailed profiles of anyone.

DOGE's data-centralization efforts and the LLM-enabled acceleration of analyst work are two sides of the same coin. In principle, both changes could help the government operate more effectively and economically. But there's a cost. "I think there's sometimes an assumption that inefficiency is always bad," says Levy. "But in privacy, you actually really want things to be hard."

Few organizations would choose inefficient procedures of their own volition, but Congress could force the government down that path. Shortly after the Anthropic debacle, a bipartisan group of senators and representatives introduced a bill that would require the government to obtain a warrant before purchasing data from data brokers. Public outcry, too, seems to have had an effect: After OpenAI was overwhelmed by opprobrium for accepting DOD contract terms that Anthropic had rejected, the company and the Pentagon modified the contract to include additional surveillance protections.

But government surveillance is not the only concern. Private companies could just as easily purchase bulk data and analyze it with LLM agents, and they are less subject to legal constraints and public opposition, especially if they aren't household names. If it is indeed possible for LLM agents to build detailed profiles of large numbers of individuals using bulk data, companies could use those capabilities to investigate job applicants or determine whether someone is insurable. "It is very, very hard to hold to account companies that are doing whatever they want to with our data," Levy says. "It's hard to even know what's happening."

In the absence of legislation preventing such uses, we might need to rethink how we understand our own privacy. It has always been possible that someone online might unearth your address or connect you with your pseudonymous accounts, but given the effort that would take, it was easy to feel safe. Even in the wake of Edward Snowden's 2013 revelations about the National Security Agency's extensive surveillance of US citizens, many people reassured themselves that their privacy was still intact because the government had no reason to look into their lives.

That kind of privacy depends entirely on friction: the time and effort required to link a secret social media account with its real-life owner, or the skill and resources needed to analyze bulk datasets. Stay under the radar, and no one will care enough to overcome that friction. But LLM agents could lessen that effort, or remove it entirely. If the government and other organizations can construct detailed profiles of millions of people at the drop of a hat, no one is beneath their notice.
[2]
US government ramps up mass surveillance with help of AI tech, data brokers - and your apps and devices
On a Saturday morning, you head to the hardware store. Your neighbors' Ring cameras film your walk to the car. Your car's sensors, cameras and microphones record your speed, how you drive, where you're going, who's with you, what you say, and biological metrics such as facial expression, weight and heart rate. Your car may also collect text messages and contacts from your connected smartphone. Meanwhile, your phone continuously senses and records your communications, info about your health and what apps you're using, and tracks your location via cell towers, GPS satellites, Wi-Fi and Bluetooth.

As you enter the store, its surveillance cameras identify your face and track your movements through the aisles. If you then use Apple or Google Pay to make your purchase, your phone tracks what you bought and how much you paid.

All this data quickly becomes commercially available, bought and sold by data brokers. Aggregated and analyzed by artificial intelligence, the data reveals detailed, sensitive information about you that can be used to predict and manipulate your behavior, including what you buy, feel, think and do.

Companies unilaterally collect data from most of your activities. This "surveillance capitalism" is often unrelated to the services device manufacturers, apps and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, "opting out" doesn't actually stop companies' data collection.

While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans' sensitive data because the information it buys is not subject to the same restrictions as information it collects directly. The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing and advising about privacy and legal issues related to surveillance and data use. To understand the issues, it is critical to know how these technologies function, who collects what data about you, how that data can be used against you, and why the laws you might think are protecting your data do not apply or are ignored.

Big money for AI-driven tech and more data

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion. Disclosure of documents allegedly hacked from Homeland Security reveals a massive surveillance web that has all Americans in its scope.

DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies. It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents' phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends.
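Stripped of vendor branding, the heat-map idea reduces to binning historical incidents into grid cells and flagging the densest cells as likely trouble spots. Here is a toy sketch in Python; the data shapes and grid resolution are assumptions for illustration, not details of any contracted system.

```python
from collections import Counter

CELL = 0.01  # grid resolution in degrees -- an arbitrary choice here

def to_cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to a coarse grid cell."""
    return (round(lat / CELL), round(lon / CELL))

def hot_spots(calls, top_n=3):
    """calls: iterable of (lat, lon) pairs from historical 911 records.
    Returns the densest grid cells, which become the 'forecast'."""
    counts = Counter(to_cell(lat, lon) for lat, lon in calls)
    return counts.most_common(top_n)

calls = [(40.71, -74.00), (40.71, -74.00), (40.72, -74.01), (40.76, -73.98)]
print(hot_spots(calls))
```

In other words, the "prediction" is just a summary of where past calls clustered, fed forward as a forecast.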
Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur. DHS has also spent millions on AI-driven software used to detect sentiment and emotion in users' online posts.

Have you been complaining about Immigration and Customs Enforcement policies online? If so, social media companies including Google, Reddit, Discord, and Facebook and Instagram owner Meta may have sent identifying data, such as your name, email address, phone number and activity, to DHS in response to hundreds of DHS subpoenas served on the companies.

Meanwhile, the Trump administration's national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund "wider deployment of AI tools across American industry" and to allow industry and academia to use federal datasets to train AI. Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information.

Blurring lines and little oversight

In foreign intelligence work, the funding, development and controlled use of certain AI-driven data gathering makes sense. The CIA's new acquisition framework to turbocharge collaboration with the private sector may be legal with proper oversight. But the line between collaboration for lawful national security purposes and unlawful domestic spying is becoming dangerously blurred or ignored. For example, the Pentagon has declared a contractor, Anthropic, a national security risk because Anthropic insisted that its powerful agentic AI model, Claude, not be used for mass domestic surveillance of Americans or fully autonomous weapons. On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans' data from data brokers, including location histories, to track American citizens.

As the federal government accelerates the use of and investment in AI-driven spy tech, it is mandating less oversight of AI technology. In addition to the national AI policy framework, which discourages state regulation of AI, the president has issued executive orders to accelerate federal government adoption of AI systems, remove state-law regulatory barriers to AI and require that the federal government not procure AI models that attempt to adjust for bias. But using advanced AI systems is risky, given reports of AI agents going rogue, exposing sensitive data and becoming a threat, even during routine tasks.

Your data

The surveillance capitalism system requires people to unwittingly participate in a manipulative cycle of group- and self-surveillance. Neighborhood doorbell cameras, Flock license plate readers and hyperlocal social media sites like Nextdoor create a crowdsourced record of all people's movements in public spaces. Sensors in phones and wearable devices, such as earbuds and rings, collect ever more sensitive details. These include health data such as your heart rate and heart rate variability, blood oxygen, sweat and stress levels, as well as behavioral patterns, neurological changes and even brain waves. Smartphones can be used to diagnose, assess and treat Parkinson's disease. Earbuds could be used to monitor brain health.
This data is not protected under HIPAA, which prohibits health care providers and those working with them from disclosing your health information without your permission, because the law does not consider tech companies to be health care providers or these wearables to be medical devices.

Legal protections

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This "consent" allows their data to end up in the largely unregulated commercial data market.

The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach. The Fourth Amendment prohibits unreasonable search and seizure by the government. Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act's Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, to govern the use of sensitive data by AI systems or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent.

In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans' privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans' data privacy and protects them from AI harms.

This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.
[3]
How everything you do is being monitored in an AI-fuelled 'surveillance capitalism system' that's ramping up aggressively
Personal data ranging from your health information to your location is being hoovered up by the government.

On a Saturday morning, you head to the hardware store. Your neighbors' Ring cameras film your walk to the car. Your car's sensors, cameras and microphones record your speed, how you drive, where you're going, who's with you, what you say, and biological metrics such as facial expression, weight and heart rate. Your car may also collect text messages and contacts from your connected smartphone. Meanwhile, your phone continuously senses and records your communications, info about your health and what apps you're using, and tracks your location via cell towers, GPS satellites, Wi-Fi and Bluetooth.

As you enter the store, its surveillance cameras identify your face and track your movements through the aisles. If you then use Apple or Google Pay to make your purchase, your phone tracks what you bought and how much you paid.

All this data quickly becomes commercially available, bought and sold by data brokers. Aggregated and analyzed by artificial intelligence, the data reveals detailed, sensitive information about you that can be used to predict and manipulate your behavior, including what you buy, feel, think and do.

Companies unilaterally collect data from most of your activities. This "surveillance capitalism" is often unrelated to the services device manufacturers, apps and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, "opting out" doesn't actually stop companies' data collection.

While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans' sensitive data because the information it buys is not subject to the same restrictions as information it collects directly. The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.

As a privacy, electronic surveillance and tech law attorney, author and legal educator, I have spent years researching, writing and advising about privacy and legal issues related to surveillance and data use. To understand the issues, it is critical to know how these technologies function, who collects what data about you, how that data can be used against you, and why the laws you might think are protecting your data do not apply or are ignored.

Big money for AI-driven tech and more data

Congressional funding is supercharging huge government investments in surveillance tech and data analytics driven by AI, which automates analysis of very large amounts of data. The massive 2025 tax-and-spending law netted the Department of Homeland Security an unprecedented US$165 billion in yearly funding. Immigration and Customs Enforcement, part of DHS, got about $86 billion. Disclosure of documents allegedly hacked from Homeland Security reveals a massive surveillance web that has all Americans in its scope. DHS is expanding its AI surveillance capabilities with a surge in contracts to private companies.
It is reportedly funding companies that provide more AI-automated surveillance in airports; adapters to convert agents' phones into biometric scanners; and an AI platform that acquires all 911 call center data to build geospatial heat maps to predict incident trends. Predicting incident trends can be a form of predictive policing, which uses data to anticipate where, when and how crime may occur. DHS has also spent millions on AI-driven software used to detect sentiment and emotion in users' online posts.

Have you been complaining about Immigration and Customs Enforcement policies online? If so, social media companies including Google, Reddit, Discord, and Facebook and Instagram owner Meta may have sent identifying data, such as your name, email address, phone number and activity, to DHS in response to hundreds of DHS subpoenas served on the companies.

Meanwhile, the Trump administration's national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to use grants and tax incentives to fund "wider deployment of AI tools across American industry" and to allow industry and academia to use federal datasets to train AI. Using federal datasets this way raises privacy law concerns because they contain a lifetime of sensitive details about you, including biographical, employment and tax information.

Blurring lines and little oversight

In foreign intelligence work, the funding, development and controlled use of certain AI-driven data gathering makes sense. The CIA's new acquisition framework to turbocharge collaboration with the private sector may be legal with proper oversight. But the line between collaboration for lawful national security purposes and unlawful domestic spying is becoming dangerously blurred or ignored. For example, the Pentagon has declared a contractor, Anthropic, a national security risk because Anthropic insisted that its powerful agentic AI model, Claude, not be used for mass domestic surveillance of Americans or fully autonomous weapons. On March 18, 2026, FBI Director Kash Patel confirmed to Congress that the FBI is buying Americans' data from data brokers, including location histories, to track American citizens.

As the federal government accelerates the use of and investment in AI-driven spy tech, it is mandating less oversight of AI technology. In addition to the national AI policy framework, which discourages state regulation of AI, the president has issued executive orders to accelerate federal government adoption of AI systems, remove state-law regulatory barriers to AI and require that the federal government not procure AI models that attempt to adjust for bias. But using advanced AI systems is risky, given reports of AI agents going rogue, exposing sensitive data and becoming a threat, even during routine tasks.

Your data

The surveillance capitalism system requires people to unwittingly participate in a manipulative cycle of group- and self-surveillance. Neighborhood doorbell cameras, Flock license plate readers and hyperlocal social media sites like Nextdoor create a crowdsourced record of all people's movements in public spaces. Sensors in phones and wearable devices, such as earbuds and rings, collect ever more sensitive details. These include health data such as your heart rate and heart rate variability, blood oxygen, sweat and stress levels, as well as behavioral patterns, neurological changes and even brain waves. Smartphones can be used to diagnose, assess and treat Parkinson's disease.
Earbuds could be used to monitor brain health.

This data is not protected under HIPAA, which prohibits health care providers and those working with them from disclosing your health information without your permission, because the law does not consider tech companies to be health care providers or these wearables to be medical devices.

Legal protections

People have little choice when buying devices, using apps or opening accounts but to agree to lengthy terms that include consent for companies to collect and sell their personal data. This "consent" allows their data to end up in the largely unregulated commercial data market.

The government claims it can lawfully purchase this data from data brokers. But in buying your data in bulk on the commercial market, the government is circumventing the Constitution, Supreme Court decisions and federal laws designed to protect your privacy from unwarranted government overreach. The Fourth Amendment prohibits unreasonable search and seizure by the government. Supreme Court cases require police to get a warrant to search a phone or use cellular or GPS location information to track someone. The Electronic Communications Privacy Act's Wiretap Act prohibits unauthorized interception of wire, oral and electronic communications.

Despite some efforts, Congress has failed to enact legislation to protect data privacy, to govern the use of sensitive data by AI systems or to restore the intent of the Electronic Communications Privacy Act. Courts have allowed the broad electronic privacy protections in the federal Wiretap Act to be eviscerated by companies claiming consent.

In my opinion, the way to begin to address these problems is to restore the Wiretap Act and related laws to their intended purposes of protecting Americans' privacy in communications, and for Congress to follow through on its promises and efforts by passing legislation that secures Americans' data privacy and protects them from AI harms.

This edited article is republished from The Conversation under a Creative Commons license. Read the original article. This article is part of a series on data privacy that explores who collects your data, what and how they collect, who sells and buys your data, what they all do with it, and what you can do about it.
[4]
AI is making it very easy for the government to spy on you. Some lawmakers are worried.
Lawmakers are leery that AI will give old-fashioned snooping a dangerous new edge.

The long-running fight to rein in the government's power to search Americans' phone calls, emails and text messages without a warrant has gained new urgency on Capitol Hill over concerns that AI will supercharge state surveillance. Lawmakers are currently jockeying over reforms to a key law that enables warrantless monitoring of Americans' communications, with privacy advocates and national security hawks warning that AI will allow faster and more invasive analysis of vast amounts of information -- including communications swept up in foreign intelligence programs and commercially available location or behavioral data.

"Imagine instead of doing a query with one person that you turned AI loose on these databases," Rep. Thomas Massie, R-Ky., said Thursday at a press conference announcing a new bill to close data-collection loopholes. "There's virtually nothing the government can't know about you."

Section 702 of the Foreign Intelligence Surveillance Act (FISA) allows the government to collect the communications of foreigners abroad, but it also enables the government to collect messages, emails and other transmissions from Americans when they contact foreigners. The government can then perform warrantless searches on those emails, messages and other communications. Though the provision was originally passed in 2008, lawmakers must renew it every few years.

A bipartisan coalition of lawmakers has emerged in recent weeks to tackle concerns about AI's ability to search through the mountains of data procured through Section 702. In March, Rep. Warren Davidson, R-Ohio, and co-sponsors in the House and Senate introduced a sweeping FISA reform bill.

"For years, there have been jaw-dropping abuses of Section 702," Sen. Ron Wyden, D-Ore., a co-sponsor of the Government Surveillance Reform Act, said on the Senate floor last week. "Government officials have searched through 702 data to find Black Lives Matter protesters, political campaign donors, elected officials, even a state judge who complained about police abuses."

America's law enforcement agencies should be able to harness technology responsibly, Wyden said, "but new tools require new rules. Without new rules, you can count on the executive branch to run roughshod over Americans' privacy rights and constitutional freedoms."

While the FISA renewal process is often fraught, as opposing sides weigh the trade-offs between surveillance and security, this year's fight has been particularly acrimonious. Section 702 was set to expire on Monday, but lawmakers agreed to a 10-day extension to provide more time to debate new protections and safeguards. The White House has pushed congressional Republicans to pass an extension of Section 702 without any changes. In a statement, a White House spokesperson told NBC News: "The Administration continues to have positive conversations and remains open to proposals that Congress can reach consensus on that would reauthorize FISA."

On Thursday afternoon, House Speaker Mike Johnson, R-La., introduced a new version of the spy law that would extend Section 702 for three years. While the new bill added some safeguards, the text did not include the search-warrant requirement sought by some Republicans.
In a statement to NBC News, Wyden said the latest draft was window dressing for the same hollow privacy guarantees: "The latest House FISA bill is a rubber stamp for [President Donald] Trump and [FBI Director] Kash Patel to spy on Americans without a warrant. Don't fall for fake reforms."

Thursday's draft follows a dramatic midnight mutiny last Friday from a group of 20 House Republicans, many of whom belong to the conservative House Freedom Caucus. Johnson had called a vote on a longer, five-year extension for Section 702 that was quickly beaten back. A final vote at 2:07 a.m. on reauthorizing the legislation for 18 months also failed, leading Johnson to agree to the 10-day extension while members hash out a new version.

Even some Democrats who had previously voted in favor of Section 702 in 2024 are now refusing to reauthorize the law without meaningful amendments. "We must reform FISA to protect our privacy and civil liberties and ensure that Section 702 will not be used to spy illegally on Americans," said Rep. Jamie Raskin, D-Md., in a hearing last week.

Like others, Raskin highlighted the Trump administration's hollowing out of existing oversight mechanisms, like the Privacy and Civil Liberties Oversight Board, as reasons to ensure stronger safeguards. "Times have changed since 2024. The watchdogs are gone," said Raskin. "Those reforms now depend on Trump administration officials to respect the law, which I am afraid is oxymoronic, if not just moronic." He also noted that many surveillance activities allowed by Section 702 will already continue through March 2027 due to a legislative stipulation extending the authority for months if Congress cannot agree on a longer-term reauthorization.

Privacy advocates have long sought to require warrants for searches of Americans' data swept into the databases powered by Section 702 and curated by data brokers. At the same time, many national security proponents and experts in the intelligence community argue that such restrictions would impede law enforcement efforts and pose severe national security risks.

The CIA and other intelligence agencies have also weighed in on the Section 702 debate, highlighting the authority's importance to American security efforts. "To be clear, the US Government cannot use Section 702 to target Americans' electronic communications for collection," a CIA handout says, adding that the law helped prevent a terror attack at a Taylor Swift concert in Austria. "Section 702 is the most extensively overseen US intelligence collection tool, with built-in protections for Americans' privacy and civil liberties."

However, civil liberties advocates note that Americans' data is often collected even when they are not explicitly targeted and that agencies then run searches on Americans once this data is obtained. "Section 702 is so vast that it incidentally collects Americans' information," said Jason Pye, vice president of the Due Process Institute, a bipartisan nonprofit that advocates for fairness in the legal system. "The FBI can then search for a person, for an American, without a warrant. That's what we're trying to solve."

Alongside the sharp exchanges about Section 702, lawmakers are also debating whether to introduce new restrictions on the government's ability to purchase data from third-party data brokers. These brokers collect and curate commercially available data on Americans gleaned from advertisements and other tracking technologies, along with information from public records.
Brokers sell their data to paying customers -- including government agencies -- who can then search these databases to track Americans' precise locations, internet browsing activity, travel history, known associates and family members, and even purchase history and transaction patterns. The directors of the National Security Agency and the FBI have acknowledged that the agencies buy data on Americans from third-party brokers to use in their investigations.

Yet experts say that the rise of AI could allow government agencies to conduct more -- and more accurate -- searches of commercial data and information contained in Section 702 databases. "The technology allows basically a panopticon," said Brendan Steinhauser, CEO of the nonprofit Alliance for Secure AI, which aims to educate Americans about risks from AI, and a leading conservative voice on the technology. "You can just have AI finding the patterns, aggregating data and allowing the government to build this enormous surveillance state that threatens our civil liberties."

In late March, Wyden sent a letter to America's leading AI companies to understand whether they would allow the government to use their technology to surveil Americans, including through the collection of bulk commercial data or intelligence data that might inadvertently include Americans' information. Wyden's office said only Anthropic and Google replied, with no reply from OpenAI or xAI. The companies' replies, shared exclusively with NBC News, note the lawmaker's concerns but largely avoid details about how the companies allow government users to analyze foreign intelligence data.

"We recognize that complex challenges can be posed by the intersection of rapidly advancing AI and government operations," wrote Anne Wall, Google's head of U.S. federal government affairs and public policy. "As we navigate this landscape, our teams maintain a deep respect for the privacy and civil liberties of individuals."

In the response from Anthropic, the company's head of North America government affairs, Brian Peters, said it was committed to protecting civil liberties and had designed its usage policy to ban "unauthorized surveillance or tracking of individuals." Peters said Anthropic barred "analysis of the product of bulk domestic collection," appearing to reference the practices of commercial data brokers. However, referring to Wyden's Section 702 concerns, Peters said Anthropic had granted an exception "to a small number of national-security customers, permitting the use of our models for foreign intelligence analysis in accordance with applicable law." Peters said that Anthropic's AI systems could be used to analyze this foreign intelligence information, even if it "includes incidentally collected U.S.-person information."

Anthropic, developer of the popular Claude family of AI models, made a public stand earlier this year after expressing concerns about how the Pentagon would use its systems, particularly regarding the use of AI for domestic mass surveillance. "We support the use of AI for lawful foreign intelligence and counterintelligence missions," Anthropic CEO Dario Amodei wrote in a statement in late February. "But using these systems for mass domestic surveillance is incompatible with democratic values. AI-driven mass surveillance presents serious, novel risks to our fundamental liberties."

Pye, of the Due Process Institute, said Americans across the political spectrum should realize the power of AI-fueled surveillance.
"Some of these AI systems, with the data that's available, they can essentially track where you're coming, where you're going, where you work, how much you earn, who you know, political affiliations, Facebook pages, Twitter accounts," Pye added. "I think this is really concerning, particularly in this very heightened, very polarized, hyperpartisan political atmosphere."
[5]
How the government is ramping up mass surveillance with AI-driven tech
Companies unilaterally collect data from most of your activities. This "surveillance capitalism" is often unrelated to the services device manufacturers, apps, and stores are providing you. For example, Tinder is planning to use AI to scan your entire camera roll. And despite their promises, "opting out" doesn't actually stop companies' data collection. While companies can manipulate you, they cannot put you in jail. But the U.S. government can, and it now purchases massive quantities of your information from commercial data brokers. The government is able to purchase Americans' sensitive data because the information it buys is not subject to the same restrictions as information it collects directly. The federal government is also ramping up its abilities to directly collect data through partnerships with private tech companies. These surveillance tech partnerships are becoming entrenched, domestically and abroad, as advances in AI take surveillance to unprecedented levels.
The US government is rapidly expanding AI-driven surveillance through data broker purchases and AI-powered analysis tools. The Department of Homeland Security received an unprecedented $165 billion in yearly funding under the 2025 tax-and-spending law and is deploying Large Language Models (LLMs) and sentiment-analysis tools that can process commercially available data on citizens, raising concerns about circumventing Fourth Amendment protections as lawmakers debate stricter privacy safeguards.
The convergence of artificial intelligence and mass surveillance is reshaping how the US government monitors its citizens. Large Language Models (LLMs) are emerging as powerful tools that could enable government agencies to analyze commercially available data at unprecedented scale and speed [1]. According to Karen Levy, a professor of information science at Cornell University, privacy protection often depends less on legal restrictions and more on "how hard or how expensive it is to learn stuff about people" [1]. AI-powered surveillance technologies are rapidly eliminating those practical barriers.
The issue gained public attention when contract negotiations between Anthropic and the US Department of Defense collapsed in late February over concerns about using AI models to analyze commercially available data on US citizens [1]. When OpenAI agreed to a similar deal hours later, the company faced immediate backlash before revising contract terms under pressure. Anthropic CEO Dario Amodei had previously argued in a January essay that AI-enabled mass surveillance could constitute a crime against humanity, highlighting the stakes of this technology [1].

The US government now purchases massive quantities of personal information from data brokers, exploiting a significant gap in Fourth Amendment protections [2]. While police need a warrant to access location data directly from your phone, purchasing personal data from commercial brokers allows agencies to bypass this requirement entirely [1]. This creates a paradox where the government cannot directly search your device without judicial approval, but can access the same information through third-party purchases.

Data brokers compile web searches, financial records, location data, and behavioral information from millions of individuals [1]. Your car's sensors record your speed, driving patterns, conversations, and even biological metrics such as facial expression and heart rate [2][3]. Meanwhile, your smartphone continuously tracks communications, health information, and location through cell towers, GPS satellites, and Wi-Fi connections [3]. This AI-fuelled surveillance capitalism system generates detailed profiles that reveal what you buy, feel, think, and do [3].
Congressional funding is supercharging government surveillance infrastructure. The massive 2025 tax-and-spending law provided the Department of Homeland Security with an unprecedented $165 billion in yearly funding, with Immigration and Customs Enforcement receiving approximately $86 billion [2][3]. This funding enables analysis of vast amounts of data through AI automation that would be impossible for human analysts alone.
DHS is expanding contracts with private companies to deploy AI-automated surveillance in airports, adapters converting agents' phones into biometric scanners, and platforms acquiring all 911 call center data to build geospatial heat maps for predictive policing [2][3]. The department has spent millions on AI-driven sentiment analysis software to detect emotion in users' online posts [2]. Social media companies including Google, Reddit, Discord, and Meta have reportedly sent identifying data to DHS in response to hundreds of subpoenas [2].
The fight over Section 702 of the Foreign Intelligence Surveillance Act (FISA) has intensified as lawmakers grapple with AI's implications for spying on citizens [4]. Section 702 allows the government to collect communications of foreigners abroad, but also sweeps up messages and emails from Americans contacting those foreigners, enabling warrantless searches of that data [4].

Rep. Thomas Massie warned at a press conference: "Imagine instead of doing a query with one person that you turned AI loose on these databases. There's virtually nothing the government can't know about you" [4]. Sen. Ron Wyden highlighted abuses including searches for Black Lives Matter protesters, political campaign donors, and elected officials, stating that "new tools require new rules" [4]. A bipartisan coalition introduced the Government Surveillance Reform Act in March, though Congress remains divided on implementation.

Rep. Jamie Raskin noted that the Trump administration's hollowing out of oversight mechanisms like the Privacy and Civil Liberties Oversight Board makes stricter privacy safeguards more urgent, observing that "the watchdogs are gone" [4]. The Trump administration's national policy framework for artificial intelligence, released on March 20, 2026, urges Congress to fund wider deployment of AI tools and allow industry to use federal datasets containing biographical, employment, and tax information to train AI models [2]. These developments signal that AI surveillance will remain a central battleground for civil liberties as technology outpaces existing legal frameworks designed to protect data privacy.