4 Sources
[1]
Google steps up AI scam protection in India, but gaps remain | TechCrunch
Google is bringing more AI muscle to India's fight against digital fraud, rolling out on-device scam detection for Pixel 9 devices and new screen-sharing alerts for financial apps. Digital fraud continues to rise in India as more people come online for the first time and increasingly rely on smartphones for payments, shopping and accessing government services. Frauds involving digital transactions accounted for more than half of all reported bank fraud in 2024 -- 13,516 cases resulting in losses of ₹5.2 billion (about $58.61 million), according to the Reserve Bank of India (RBI). Online scams caused an estimated ₹70 billion (roughly $789 million) in losses in the first five months of 2025, the Ministry of Home Affairs said. Many incidents likely go unreported, either because victims are unsure how to file a complaint or wish to avoid additional scrutiny. On Thursday, Google announced the expansion of its real-time scam detection feature, which uses Gemini Nano to analyze calls on-device and flag potential fraud without recording audio or sending data to Google's servers. The feature is off by default and applies only to calls from unknown numbers, and it plays a beep during the conversation to notify participants. It debuted in the U.S. in March as a beta for English-speaking Pixel 9 users. Google confirmed to TechCrunch that its on-device scam detection will initially work only on Pixel 9 and later models in India and will be limited to English-speaking users, with its warning also English only. That restricts its reach in a market where Android accounts for nearly 96% of smartphones, per StatCounter, but Pixel devices held less than 1% share in 2024. The language limitation is also notable in a country where most users primarily rely on non-English languages -- an audience that Google and others like Amazon have acknowledged by adding support for Indian languages across their services in recent years. 
The tech giant did say it was working to bring scam detection to non-Pixel Android phones, as well, without offering a timeline. Google also announced a pilot in India with financial apps Navi, Paytm, and Google Pay aimed at limiting screen-sharing scams, in which fraudsters persuade victims to share their screens to obtain one-time passwords, PINs and other credentials during a call. The feature was first announced at Google I/O in May and initially tested in the U.K. Users with devices running Android 11 or later will be able to access the alerts, which include a one-tap option to end the call and stop screen sharing. Google confirmed to TechCrunch that it plans to add more app partners and the feature will display alerts in Indian languages as well but did not provide details. For several months, Google has also been using its Play Protect service to restrict predatory loan apps in India by blocking the sideloading of third-party apps that request sensitive permissions often exploited for fraud. The company said the service blocked more than 115 million such installation attempts this year. Google Pay, meanwhile, surfaces more than a million warnings each week for transactions flagged as potentially fraudulent, according to the company. Google is also running its DigiKavach awareness campaign on digital fraud, which it said has reached more than 250 million people. The company has worked with the Reserve Bank of India to publish a public list of authorized digital lending apps and their associated non-banking financial companies to help limit malicious actors. Earlier this year, Google launched a Safety Charter in India to expand its AI-driven fraud detection and security efforts, part of a broader plan to deploy more AI tools in the country to address rising fraud. Yet Google still faces significant gaps in curbing digital fraud in India. 
The company -- like Apple -- has been questioned for allowing fake and misleading apps to appear on its app store despite review processes meant to block fraudulent submissions. In recent years, police and security researchers have flagged investment and loan apps used in scams that remained available on the Play Store until intervention. These cases underscore the challenges Google faces in policing an ecosystem that dominates the country's smartphone market.
[2]
Google Announces New AI Safety Tools for India, Expands Access to SynthID
Google is also bringing a new system that replaces SMS OTP flows

Google on Thursday announced several new India-focused artificial intelligence (AI) safety updates. The Mountain View-based tech giant made the announcements during an event in New Delhi. Among the new initiatives and features coming to Indian users are protections for financial apps, new secure technology to replace SMS OTP flows, and expanded access to the SynthID AI watermarking detection tool. The company said these new updates will not only make AI safer to use but also provide individuals with tools to protect themselves from malicious usage of it.

Google Brings New AI Safety Tools to India

In a blog post, the tech giant detailed the new updates and initiatives it is bringing to India to "protect vulnerable audiences from online harm, build robust privacy and cybersecurity tools for enterprises and AI models that are representative, equitable and inclusive." Google said these features were being introduced ahead of India's AI Impact Summit, which is scheduled to be held on February 19-20, 2026.

Google is partnering with its Google Pay division and financial apps Navi and Paytm to pilot a new screen-sharing scam alert feature in India. Available on Android 11 and newer, this feature will display an alert if a user opens one of these apps while screen sharing on a call with an unknown contact. The tech giant is also working on Enhanced Phone Number Verification (ePNV), a new Android-based security protocol that is designed to replace the existing SMS OTP flow for authentication. Google says the new system is a "secure, consented, SIM-based check" which eliminates the vulnerabilities that come with OTPs. Additionally, the company confirmed that the real-time Scam Detection feature, which flags potential scams on-device via Gemini Nano, is now rolling out to Pixel smartphones. The feature does not record audio or generate transcripts, and no data is sent to Google. 
Notably, this is turned off by default. The company also highlighted that Google Play Protect has now blocked more than 115 million attempts to "install sideloaded apps that use sensitive permissions that are frequently abused for financial fraud in India." SynthID, Google's proprietary AI watermarking and detection technology, is also being expanded to academia, researchers, and several media publishers.
[3]
Google outlines new AI safety push focused on kids, teens, and seniors
In a move towards responsible innovation, Google is prioritizing AI safety. Advanced on-device measures will protect vulnerable users, including children, teenagers, and seniors, from online threats.

Google used its 'Safe and Trusted AI' event in New Delhi to lay out a broad update on how it's approaching AI safety ahead of the AI Impact Summit 2026. The message was fairly direct: if AI is going to scale responsibly, safety has to be treated like core infrastructure, not an afterthought. A major chunk of the announcement centred on protecting the groups that are often most vulnerable online, such as children, teenagers and the elderly, through a mix of on-device safeguards, digital literacy efforts and wider ecosystem partnerships.

Google is rolling out more protections that work directly on the device, making them faster, private and always active when needed. One of the biggest additions is real-time scam detection for phone calls on Pixel devices, powered by Gemini Nano. It analyses suspicious patterns on-device and alerts users without storing audio or sharing data with Google. Another pilot feature warns users if they are screen-sharing and open financial apps like Google Pay, Paytm or Navi while speaking to an unknown caller, a common tactic in fraud cases. Alongside this, Google is expanding access to its SynthID watermarking and detection tools, opening them up to researchers and major Indian publishers.

On the literacy side, Google is bringing its Learn and Explore Online (LEO) program to India in December 2025 to help teachers, parents and practitioners create age-appropriate online environments. Programs like "Sach Ke Sathi" and "DigiKavach for Seniors" continue to train older users across multiple cities, while the Super Searchers initiative has already reached teachers, students and broader communities through a scaled train-the-trainer model. 
Google.org is also granting $200,000 to the CyberPeace Foundation to build capacity, develop AI-driven defence tools and strengthen digital safety environments for children and teens, including through hackathons and collaborations with startups. The company also outlined how it is working with regulators and the broader ecosystem to raise safety standards. With the Reserve Bank of India, Google helped publish a verified list of authorised digital lending apps to curb fraud. It is collaborating with IIT Madras and CeRAI to advance AI safety research tailored to India's diverse languages and devices. On the cybersecurity front, tools like CodeMender aim to detect and patch vulnerabilities in open-source software autonomously. Google is also developing Enhanced Phone Number Verification (ePNV), which aims to replace SMS OTPs with a secure, SIM-based mechanism directly on the device. Through the APAC Digital Futures Fund, Google.org is supporting research across the region with a $1 million commitment. The overall takeaway from Google's event is consistent: as AI becomes more integrated into public services, finance, education and everyday digital interactions, the guardrails around it have to evolve just as quickly -- especially for those most at risk.
[4]
Google Announces ePNV, Other Features to Protect Indian Users
Google has announced a flurry of initiatives aimed at strengthening digital safety in India as the country rapidly embraces AI. In a blog authored by Evan Kotsovinos, Vice-President of Privacy, Safety & Security, the company notes that "safety is the infrastructure for transformational AI", rather than simply an add-on. With more Indians coming online every month, Google emphasises the urgency of trusted and secure digital adoption in a landscape where scams are becoming increasingly sophisticated. The update lays out how Google intends to embed protections into everyday products, advance privacy and cyber resilience and partner with local communities. By doing so, Kotsovinos underlines India's potential not just as a market, but as a proving ground for globally scalable safety frameworks in the blog. The company is rolling out a new set of safety tools in India as scams grow more sophisticated and digital payments become routine. Speaking at 'A Dialogue on Safe and Trusted AI' held by Google yesterday in Delhi, Anirban Nandi, Director for Android Partnerships at Google India, said the company wants to shift India "towards a future where we can predict and prevent a potential scam", rather than respond after losses occur. Additionally, he warned that 80% of Indian adults have encountered online scams and that half never recover their money. At the heart of these changes are protections built directly into Android devices. Google already blocks 2 billion scam messages and calls in India each month. Furthermore, Nandi pointed out that Google Play Protect has stopped 115 million attempts to install sideloaded apps that misuse sensitive permissions commonly exploited in financial fraud. Google Pay also now surfaces over 1 million warnings every week for high-risk transactions. 
However, the newest tools introduced for India take a more proactive approach, focusing on the moments when users are most vulnerable. Alongside device-level protections, Google is working to help Indians navigate an online world where AI-generated images and videos are becoming increasingly difficult to distinguish. Google is widening access to SynthID, its AI-watermarking detector, to Indian publishers such as Jagran, Press Trust of India (PTI), and India Today. Furthermore, at the dialogue, Miriam Estrin, Google's Global Lead on Content Moderation and AI Content Responsibility, emphasised that users ultimately want to understand whether a piece of content is trustworthy, not merely whether it was produced by AI. "Just asking the question, 'is this generated by AI?', is not the same as asking, 'is this trustworthy?'" she said.

Additionally, Estrin warned that relying too heavily on visible labels for all synthetic content creates new risks. Bad actors, she noted, can "attach an AI label to a real image", turning labelling itself into a manipulation tool. Meanwhile, over-labelling can lead to "banner blindness" as users stop noticing warnings altogether. To address this, Google is expanding access to SynthID, which embeds a watermark invisibly inside the pixels themselves. This allows the watermark to survive common edits such as filters, stickers and colour changes, giving journalists and publishers a more reliable verification method. However, Estrin was clear that the technology is not flawless. If someone heavily blurs an image, crops it dramatically, or intentionally obscures the relevant pixel patterns, the watermark may be lost.

Notably, MeitY released a draft synthetic information amendment to the IT Rules, 2021, seeking to mandate the labelling of any synthetically generated information. The amendment mandates that at least 10% of the visual display should be covered with the label, or 10% of the duration in the case of audio outputs. 
Google is also expanding programmes that aim to help children, teenagers and older adults navigate India's changing digital environment. Estrin said the company wants to "empower kids, teens and families" while ensuring that AI tools remain safe and trustworthy. Super Searchers is a classroom-facing programme that works directly with schoolchildren and teachers. It teaches them to "spot AI-generated content" and to think critically about what they see online, helping students build habits for interpreting images, videos and posts. Meanwhile, Google will introduce LEO (Learn and Explore Online) in December 2025 to help parents understand online risks and choose the right tools for their children. Estrin said the goal is to help families "understand how to keep their kids safe online" while still encouraging exploration and learning by building "foundational knowledge on how to use Google's parental tools and create age-appropriate online experiences". Additionally, the company is extending support to seniors through DigiKavach for Seniors, which offers in-person digital-safety sessions to help older users recognise scams, misleading links and coercive online behaviour, a group that fraudsters often target. India's rapid shift to digital payments, online services and AI-driven tools has widened both opportunity and risk. Google's announcements matter because many of the vulnerabilities being targeted, such as screen-sharing fraud, SMS-based authentication and the spread of synthetic media, have become routine entry points for scams and misinformation. Moreover, these weaknesses affect users across age groups, from schoolchildren encountering manipulated images to seniors facing coercive phone calls. At the same time, the movement towards SIM-based verification and on-device scam detection marks a shift away from easily intercepted systems that criminals regularly exploit. 
Furthermore, the expansion of programmes for teachers, parents and older adults underscores that technical fixes alone will not keep users safe; digital literacy plays a central role in how people interpret content and respond to threats. Consequently, the combined approach of product-level protections and education reflects growing recognition that trust and safety must evolve alongside India's accelerating digital adoption.
Google introduces multiple AI-driven safety tools in India, including on-device scam detection, screen-sharing alerts for financial apps, and enhanced authentication systems to protect vulnerable users from sophisticated digital fraud.
Google has announced a comprehensive suite of AI-powered safety tools specifically designed for the Indian market, addressing the country's escalating digital fraud crisis. The initiative comes as digital fraud continues to surge in India, with online scams causing an estimated ₹70 billion (roughly $789 million) in losses in just the first five months of 2025, according to the Ministry of Home Affairs [1]. The announcement was made during Google's 'Safe and Trusted AI' event in New Delhi, where the company emphasized that "safety is the infrastructure for transformational AI" rather than an afterthought [3].
The centerpiece of Google's new safety arsenal is real-time scam detection for phone calls, powered by Gemini Nano AI technology. This feature analyzes suspicious patterns directly on Pixel devices without recording audio or sharing data with Google's servers [1]. The system plays a beep during conversations to notify participants that the feature is in use.
However, the rollout faces significant limitations. Initially, the feature will only work on Pixel 9 and later models, restricting its reach in a market where Pixel devices held less than 1% share in 2024, despite Android accounting for nearly 96% of smartphones in India [1]. Additionally, the feature is currently limited to English-speaking users, a notable constraint in a country where most users primarily rely on non-English languages.

Google is piloting a new screen-sharing scam alert feature in partnership with financial apps including Navi, Paytm, and Google Pay. This feature targets a common fraud tactic where scammers persuade victims to share their screens to obtain one-time passwords, PINs, and other credentials during calls [2]. Available on devices running Android 11 or later, the feature displays alerts when users open these financial apps while screen-sharing with unknown contacts. The alerts include a one-tap option to end the call and stop screen sharing immediately [1].

Google is developing Enhanced Phone Number Verification (ePNV), a new Android-based security protocol designed to replace the existing SMS OTP authentication system. This "secure, consented, SIM-based check" aims to eliminate vulnerabilities associated with traditional one-time passwords [2]. The new system represents a significant shift toward device-based authentication, reducing reliance on SMS messages that can be intercepted or compromised [4].
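Google has not published ePNV's technical details, so the following is only a generic sketch of why a token-based, SIM-attested number check resists the scams that defeat SMS OTP. All names (`sim_attest`, `verify_number`, `CARRIER_KEY`) are hypothetical, and a single shared key stands in for real carrier infrastructure. The key property is that no human-readable code ever appears on screen, so a scammer has nothing to talk the victim into reading out.

```python
import hashlib
import hmac
import secrets

# Hypothetical shared secret between the carrier and the verifying server;
# a real deployment would use proper carrier-side key management, not a constant.
CARRIER_KEY = secrets.token_bytes(32)

def sim_attest(phone_number: str) -> tuple[str, str]:
    """Mock of the carrier/SIM attestation step: produce a one-time,
    number-bound token delivered to the app over a secure channel.
    No code is ever shown to the user, unlike an SMS OTP."""
    nonce = secrets.token_hex(16)
    tag = hmac.new(CARRIER_KEY, f"{phone_number}:{nonce}".encode(),
                   hashlib.sha256).hexdigest()
    return nonce, tag

def verify_number(phone_number: str, nonce: str, tag: str) -> bool:
    """Mock server-side check: recompute the MAC for the claimed number."""
    expected = hmac.new(CARRIER_KEY, f"{phone_number}:{nonce}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

nonce, tag = sim_attest("+91-XXXXXXXXXX")                # placeholder number
assert verify_number("+91-XXXXXXXXXX", nonce, tag)       # genuine check passes
assert not verify_number("+91-YYYYYYYYYY", nonce, tag)   # wrong number fails
```

A production protocol would also need replay protection (single-use nonces) and explicit user consent, which the announcement's "secure, consented" phrasing implies but this toy omits.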
Google's existing Play Protect service has already blocked more than 115 million attempts to install sideloaded apps that misuse sensitive permissions commonly exploited for financial fraud in India [2]. Google Pay surfaces over one million warnings weekly for transactions flagged as potentially fraudulent.
The company is also expanding access to SynthID, its AI watermarking and detection technology, to Indian publishers including Jagran, Press Trust of India (PTI), and India Today [4]. This technology embeds invisible watermarks in AI-generated content, helping journalists and publishers verify authenticity.
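SynthID's internals are proprietary, but the general idea of an invisible, edit-tolerant pixel watermark can be illustrated with a vastly simplified classical technique: quantization index modulation (QIM), which hides each bit in the average brightness of a pixel block. This is not Google's method, only a toy analogue showing why small edits (noise, mild filtering) leave such a mark readable while heavy blurring or cropping can destroy it, as Estrin noted.

```python
import random

Q = 8  # quantization step: larger Q survives bigger edits but is more visible

def embed(pixels, bits, block=16):
    """Embed one bit per block of greyscale values via QIM: shift each
    block's mean to the nearest even multiple of Q for bit 0, odd for bit 1."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        blk = out[i * block:(i + 1) * block]
        mean = sum(blk) / block
        k = round(mean / Q)
        if k % 2 != bit:            # move to the adjacent multiple with the right parity
            k += 1 if mean / Q >= k else -1
        shift = k * Q - mean        # nudge every pixel in the block equally
        out[i * block:(i + 1) * block] = [v + shift for v in blk]
    return out

def extract(pixels, nbits, block=16):
    """Read the parity of each block mean's nearest multiple of Q; decoding
    survives any distortion that moves a block mean by less than Q/2."""
    return [round(sum(pixels[i * block:(i + 1) * block]) / block / Q) % 2
            for i in range(nbits)]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
img = [random.randint(0, 255) for _ in range(16 * len(msg))]
marked = embed(img, msg)
assert extract(marked, len(msg)) == msg

# A mild "edit" (small per-pixel noise, like light filtering) leaves it readable.
noisy = [v + random.uniform(-3.0, 3.0) for v in marked]
assert extract(noisy, len(msg)) == msg
```

Real systems like SynthID spread the signal across many transform-domain coefficients and tune it against perceptual models, which is what lets the mark survive filters, stickers and colour changes far better than this block-mean toy.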
Google is launching several educational programs targeting vulnerable populations. The Learn and Explore Online (LEO) program will debut in India in December 2025, helping teachers and parents create age-appropriate online environments [3]. The company's DigiKavach awareness campaign has reached more than 250 million people, while the Super Searchers initiative teaches students and teachers to identify AI-generated content and think critically about online information [4].

Google has partnered with the Reserve Bank of India to publish a verified list of authorized digital lending apps, helping users identify legitimate financial services [1]. The company is also collaborating with IIT Madras and CeRAI to advance AI safety research tailored to India's diverse languages and devices.

Despite these efforts, significant gaps remain in Google's fraud prevention capabilities. The company continues to face criticism for allowing fake and misleading apps to appear on the Play Store, with police and security researchers flagging investment and loan apps used in scams that remained available until intervention [1].