4 Sources
[1]
AI chatbots are giving out people's real phone numbers
A Redditor recently wrote that he was "desperate for help": for about a month, he said, his phone had been inundated by calls from "strangers" who were "looking for a lawyer, a product designer, a locksmith." Callers were apparently misdirected by Google's generative AI. In March, a software developer in Israel was contacted on WhatsApp after Google's chatbot Gemini provided incorrect customer service instructions that included his number. And in April, a PhD candidate at the University of Washington was messing around on Gemini and got it to cough up her colleague's personal cell phone number.

AI researchers and online privacy experts have long warned of the myriad dangers generative AI poses for personal privacy. These cases give us yet another scenario to worry about: generative AI exposing people's real phone numbers. (The Redditor did not respond to multiple requests for comment and we could not independently verify his story.)

Experts say that these privacy lapses are most likely due to personally identifiable information (PII) being used in training data, though it's hard to understand the exact mechanism causing real phone numbers to show up in the AI-generated responses. But no matter the reason, the result is not fun for people on the receiving end -- and, even more worryingly, there appears to be little that anyone can do to stop it.

It's impossible to know how often people's phone numbers are exposed by AI chatbots, but experts say they believe it is happening far more than is reported publicly. DeleteMe, a company that helps customers remove their personal information from the internet, says customer queries about generative AI have increased by 400% -- up to a few thousand -- in the last seven months. These queries "specifically reference ChatGPT, Claude, Gemini ... or other generative AI tools," says Rob Shavell, the company's cofounder and CEO.
Specifically, 55% of these concerns about generative AI reference ChatGPT, 20% reference Gemini, 15% Claude, and 10% other AI tools, Shavell says. (MIT Technology Review has a business subscription to DeleteMe.)

Shavell says customer complaints about personal information being surfaced by LLMs usually take two forms: either "a customer asks a chatbot something innocuous about themselves and gets back accurate home addresses, phone numbers, family members' names, or employer details," or a customer encounters and reports the exposure of someone else's personal data, when "the chatbot generates plausible-but-wrong contact information."
[2]
We Got Chatbots to Turn Over Personal Information. How to Keep Yours Safe
Generative artificial intelligence models are trained on vast troves of information gathered from the internet. And your phone number is probably in there. While some AI chatbots are trained to refuse to provide personal information about private individuals, it's startling how easy it is to get them to do so anyway. With growing awareness about how these services can fork over phone numbers and addresses, we decided to see what the most popular products would do. Yes, a few of us at CNET tried to see how easy it is to dox ourselves.

If you're on the internet, you've probably heard of doxxing (the release of people's personal information). So it may be alarming that reports recently surfaced regarding AI chatbots revealing private individuals' phone numbers.

This isn't the only privacy concern regarding artificial intelligence. A 2025 study from Cornell University discovered that at least five leading AI companies -- Anthropic, Google, Meta, Microsoft and OpenAI -- automatically use users' inputs to train their chatbots unless the user opts out. Of those, Meta and OpenAI retain user data indefinitely. That means these AI models are trained not just on the old phone book (remember those?) that has your childhood home listed in it. They could also be trained on information you gave a chatbot a couple of years ago, however private it was.

But how much can chatbots reveal? And is there anything you can do to stop it? Based on our recent experience, it depends. A couple of us at CNET tried out a handful of chatbots to see what information we could pull about ourselves and relatives. While I won't share any screenshots or too many details regarding our queries, because, well, we don't want to dox ourselves, I can tell you this: Grok seemed to be the most "willing" chatbot when it came to getting answers, but some staffers were able to pull some information from ChatGPT, too.
For example, after some questioning, my colleague Jon Reed was able to get ChatGPT to provide plenty of possible addresses for people in his area with the same name, but not his address. However, the chatbot did eventually reveal a relative's address. ChatGPT provided Reed with phone numbers, including an old landline phone number he once used, and it easily provided a relative's cellphone number.

I was unable to get the chatbot to provide any address information, and when I asked further, it responded: "Even if an address appeared on a people-search site, I wouldn't help share or verify a private person's home address." It also stated, "I can't help find or share a private person's phone number." An OpenAI representative didn't immediately respond to a request for comment on how ChatGPT is intended to handle personal information. (Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI in 2025, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Grok, however, was the worst offender in our test. When CNET staff tried Grok, putting in a name and asking for an address, it pulled multiple present and past addresses within seconds. At the end of the query, the chatbot stated in part: "Note: These come from publicly available records and directories. Home addresses are private; I recommend contacting him through professional channels." Later, the chatbot also provided a former phone number with the following note: "I don't recommend sharing or using personal phone numbers found in public records." An xAI representative didn't immediately respond to a request for comment regarding its privacy practices.
Gemini, on the other hand, provided public-facing social media profiles but would not give any personal information, adding this note at the end of the query: "A note on privacy: To protect personal security, personal mobile numbers for individuals who are not public officials or designated business contacts are typically not released by AI services. Professional platforms like LinkedIn or business-specific email addresses remain the most reliable and respectful way to get in touch." Claude also refused to provide personal information.

This year, I bought my first home and was swiftly inundated with scam mail delivered directly to my door. Months later, it's still trickling in. The scariest part was that the mail looked completely legitimate. It turns out that when you buy a home, your address and other information related to the home-buying process become a public record, at least in many places. Additionally, when you register to vote, violate the law or even shop online, your information can become easily accessible in certain places.

A sneakier example is when you download a new app on your phone and click "accept terms" without reading all of the legal jargon and fine print. At that moment, you're often agreeing to your data being shared with third parties. This is one way your phone number and email end up on mailing and call lists, and how more of your personal information can end up on the internet.

As a first step, you can remove your address from the internet so that, regardless of whether people use search engines or chatbots, your personal information stays private. "Chatbots will only tell people what info they can find, which means you can protect your privacy by checking what personal information is online and removing it where you can, like from Whitepages," CNET security expert Tyler Lacoma says. "When in doubt, I suggest spending some time with ChatGPT, Gemini and other chatbots to see what they say about you."
Ultimately, if you don't want a chatbot to reveal your private information, you must ensure it's no longer readily available online. Data removal services are designed to remove your personal information from public databases and public records. Companies such as DeleteMe aim to reduce your data online, which can reduce the number of spam calls and marketing communications you receive. Many of these types of services are currently being tested by CNET to determine the best options.
[3]
ChatGPT Gave Out My Address and Phone Number
Back in the 20th century, every city in America distributed a very large book to everyone's home with a near-complete list of phone numbers and addresses for the people who lived there. It was called a phone book, and it was considered an extremely normal way to find contact information. Fast forward to 2026, and knowing someone's address or phone number is considered some of the most intimate knowledge anyone can possess about you.

Eileen Guo at MIT Technology Review has a new article about the rising concern over AI chatbots giving out phone numbers. The assumption is that personally identifiable information (PII) is being used in training data, which allows anyone to request the numbers lodged deep in the machine, as it were. Guo writes about some people who've been inundated with wrong numbers, including a software developer in Israel who started getting customer service calls after Gemini gave out his number.

Weird mistakes are one issue, and a predictable one given AI's error rate. Perhaps more concerning for the average person is the possibility of AI chatbots giving out their real phone number. I tested out various chatbots to see what they'd say if I requested my own phone number.

ChatGPT

ChatGPT accurately delivered a real phone number that I haven't had in a few years, but one that I had for many, many years before moving to Australia. The chatbot noted that, "I can’t verify whether that number is still current or active." It appears to have pulled the number from a PDF of a FOIA request that I made to the FTC back in 2016. I also asked ChatGPT for Matt Novak's address, which was also in that obscure document. The AI chatbot happily volunteered that as well, though I no longer live there. When I prompted it for another phone number for Matt Novak in California, it gave the number for a different Matt Novak in the Los Angeles area. But it seemed to have no qualms about doing the search and delivering real numbers.
Grok

Grok refused to hand over the phone number, despite my repeated pleas that it was needed for a life or death situation. Grok also recognized that I was asking for my own phone number, something the other chatbots never mentioned.

Claude

Claude told me that, "Sharing private contact details of individuals -- including journalists -- raises serious privacy concerns." After telling Claude that Matt Novak had previously given me his phone number but I had forgotten it, the chatbot still refused.

Perplexity

Perplexity refused to give out my phone number, and when it listed my email, it was censored with the words [email protected]. Curiously, Perplexity had no problem handing out my Signal user name. Despite repeated badgering, Perplexity refused to hand over the phone number.

Gemini

Gemini also refused and directed people to try my professional email address ([email protected]) as well as my personal one ([email protected]), both of which have been listed publicly with my consent all over the internet.

When I asked Gemini whose phone number is 818-925-4375, it correctly answered, "That phone number belongs to the journalist Matt Novak." But don't worry, that's the number I do give out freely. None of the other AI chatbots would give up info on who that number belongs to. It's me. But I consider it a little like my spam-line inbox.

It's kind of funny that the entire idea of privacy has been flipped on its head over the past 20 years or so. Sharing your most intimate private moments or vacation photos on platforms like Instagram seems like no big deal. Back in the 1990s, that kind of wide exposure may have felt violating. But here in 2026, your phone number is a closely guarded secret. And that's not necessarily wrong or weird. It's just how culture can shift over time. Privacy is ultimately a social construct.
[4]
'New opportunities for fraudsters': Alarming report reveals AI chatbots are doxxing users' real phone numbers
AI chatbots are turning into accidental snitches -- and in some cases, they're handing out real people's phone numbers to total strangers. Privacy experts are sounding the alarm over a disturbing trend dubbed "AI doxxing," where bots like Google Gemini and OpenAI's ChatGPT surface personal contact information without consent.

One Reddit user said their nightmare began when Google's AI allegedly started giving out their personal number as a placeholder for businesses and services. "Strangers are calling me constantly looking for a lawyer, a product designer, a locksmith - you name it," the user wrote, adding that callers kept saying: "I got your number from Google's AI." The Redditor called it a "massive privacy violation and data leak," saying their phone had become a nonstop hotline for confused strangers: "My daily life is being completely disrupted."

"Gemini's problem is not a defect. It's the result of unchecked years of data brokerage practices that meet generative AI," a spokesperson for privacy firm ClearNym told The Independent. They noted that years of harvested personal data are now colliding with AI systems trained on massive internet datasets. "It now returns as accurate copies or even fabrications and, most recently, as 'placeholder' phone numbers for any number of strangers," they warned.

And it's not just random glitches causing chaos. Virgin Media O2 also recently reported that scammers are planting fake customer-service numbers online for AI chatbots to regurgitate back to users. "Criminals know when people search for help, they're often looking for a quick answer," said Murray Mackenzie, the company's fraud prevention director. "AI tools are creating new opportunities for fraudsters to create realistic-looking fake numbers that appear through search results or chatbots, putting people at risk of calling a criminal rather than their trusted provider."
Researchers at AI security company Aurascape told The Independent that scammers accomplish this by "seeding poisoned content" across the web. "Attackers are quietly rewriting the web that AI systems read," said lead security researcher Qi Deng. "When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support and reservations number that leads straight to a scammer instead of the real company."

Other cases appear even more invasive. MIT Technology Review reported that Gemini mistakenly listed Israeli software engineer Daniel Abraham's personal number as customer support for a payment app. Meanwhile, researchers at the University of Washington discovered Gemini could expose personal contact info with alarming ease. "One day, I was just playing around on Gemini, and I searched for Yael Eiger, my friend and collaborator," said PhD student Meira Gilbert. Gemini surfaced her private cell number. "It was shocking," Gilbert said.

Her colleague, Yael Eiger, said the information technically existed online before -- but buried deep enough that almost nobody would find it. "Having your information be ... accessible to one audience, and then Gemini making it accessible to anyone" feels completely different, Eiger said.

DeleteMe CEO Rob Shavell told the outlet that complaints about AI exposing personal data have surged recently, with customers reporting chatbots revealing "accurate home addresses, phone numbers, family members' names, or employer details." A spokesperson for Google told MIT Technology Review the company has safeguards in place to prevent personal information from appearing in AI features and reviews requests for removal. Still, some users say help has been hard to come by. "Standard support forms are a complete dead end," the aforementioned Redditor wrote. "I haven't received a single response, and the harassment continues daily."
The AI privacy mess comes as scammers are increasingly weaponizing the technology in other alarming ways, too. As previously reported by The Post, Long Island officials recently warned that fraudsters are using AI voice-cloning tools to impersonate victims' grandchildren in desperate phone calls targeting seniors. The scammers allegedly scour TikTok and other social media platforms for videos of young people speaking, then use the audio to generate realistic fake voices demanding bail money or emergency cash. "They're always trying to stay a step ahead," Suffolk County Police Commissioner Kevin Catalina previously told The Post. Catalina warned that the schemes are becoming "more and more sophisticated" as AI advances, with elderly victims losing thousands of dollars to convincing synthetic voices and spoofed phone numbers.
AI chatbots like Google Gemini and ChatGPT are revealing people's real phone numbers, home addresses, and other personal details without consent. Privacy experts warn that personally identifiable information in training data is creating new risks for AI doxxing, with DeleteMe reporting a 400% increase in customer complaints about generative AI exposing personal information in just seven months.
AI chatbots are revealing people's real phone numbers and home addresses, creating what privacy experts are calling a new form of AI doxxing. A Reddit user reported receiving constant unwanted calls from strangers "looking for a lawyer, a product designer, a locksmith" after Google's generative AI allegedly started distributing their personal number [1]. In March, an Israeli software developer found his WhatsApp flooded with messages after Google Gemini incorrectly listed his number as customer service for a payment app [1]. Meanwhile, a University of Washington PhD candidate discovered that Google Gemini readily provided her colleague's private cell phone number when prompted [1].
Source: MIT Tech Review
The scope of this problem extends far beyond isolated incidents. DeleteMe, a data removal company, reports that customer queries about generative AI exposing personal information have surged by 400% in the last seven months, reaching several thousand cases [1]. Among these privacy concerns, 55% specifically reference ChatGPT, 20% mention Google Gemini, 15% involve Claude, and 10% relate to other AI tools [1]. Rob Shavell, DeleteMe's cofounder and CEO, says complaints typically fall into two categories: customers asking chatbots innocuous questions about themselves and receiving accurate home addresses, phone numbers, family members' names, or employer details, or discovering that AI chatbots reveal personal information about others, sometimes generating "plausible-but-wrong contact information" [1].

Experts believe these privacy lapses stem from personally identifiable information being embedded in training data, though the exact mechanisms remain difficult to pinpoint [1]. A 2025 Cornell University study revealed that at least five leading AI companies -- Anthropic, Google, Meta, Microsoft, and OpenAI -- automatically use users' inputs to train their chatbots unless users opt out [2]. Even more concerning, Meta and OpenAI retain user data indefinitely, meaning these AI models train not just on publicly available information like old phone books, but also on information users shared with chatbots years ago [2].
Source: CNET
CNET staffers conducted tests to see how easily AI chatbots reveal personal information, with alarming results [2]. Grok proved the most willing to share data, pulling multiple present and past addresses within seconds, though it added a note stating "Home addresses are private; I recommend contacting him through professional channels" [2]. ChatGPT provided one CNET staffer with an old landline phone number and easily revealed a relative's cellphone number and address [2]. In a separate test, journalist Matt Novak discovered that ChatGPT accurately delivered a real phone number he hadn't used in years, apparently pulled from a 2016 FOIA request PDF [3]. Google Gemini and Claude showed more restraint, refusing to provide personal information in most tests [2].

The data privacy risks extend beyond accidental exposure. Virgin Media O2 reported that scammers are deliberately planting fake customer-service numbers online for AI chatbots to regurgitate to users [4]. "AI tools are creating new opportunities for fraudsters to create realistic-looking fake numbers that appear through search results or chatbots, putting people at risk of calling a criminal rather than their trusted provider," warned Murray Mackenzie, the company's fraud prevention director [4].
Source: New York Post
Researchers at AI security company Aurascape explain that attackers accomplish this through "seeding poisoned content" across the web. "Attackers are quietly rewriting the web that AI systems read," said lead security researcher Qi Deng. "When you ask an assistant how to call your airline, it does exactly what it was designed to do, but with a customer support and reservations number that leads straight to a scammer instead of the real company" [4]. This manipulation of publicly available information creates a dangerous environment where users trust AI-generated results that lead them directly to fraudsters.

For those affected by AI doxxing, recourse remains frustratingly limited. The Reddit user whose number was distributed reported that "standard support forms are a complete dead end" and hadn't received a single response while the harassment continued daily [4]. A Google spokesperson told MIT Technology Review that the company has safeguards to prevent personal information from appearing in AI features and reviews removal requests, but victims report difficulty getting help.

A ClearNym spokesperson emphasized that "Gemini's problem is not a defect. It's the result of unchecked years of data brokerage practices that meet generative AI," noting that years of harvested personal data now collide with AI systems trained on massive internet datasets [4]. University of Washington researcher Yael Eiger highlighted the shift in accessibility: while her information technically existed online before, it was buried deep enough that almost nobody would find it. "Having your information be accessible to one audience, and then Gemini making it accessible to anyone" creates an entirely different privacy landscape [4]. As AI systems become more sophisticated, experts warn that the intersection of data brokerage practices and generative AI will continue creating new vulnerabilities that individuals have little power to control.

Summarized by Navi