11 Sources
[1]
Google to scale up AI-powered fraud detection and security operations in India | TechCrunch
Google has unveiled its Safety Charter in India, which will expand its AI-led efforts in fraud detection and combating scams across the country, the company's largest market outside the United States. Digital fraud in India is rising. Fraud related to the Indian government's instant payment system UPI grew 85% year-over-year to nearly 11 billion Indian rupees ($127 million) last year, per the government's data. India has also seen several instances of digital arrest scams, where fraudsters pose as officials to extort money via video calls, as well as a proliferation of predatory loan apps. With its Safety Charter, Google aims to address some of these areas. The company has also launched its security engineering center in India, its fourth such center after Dublin, Munich, and Malaga. Announced at the Google for India summit last year, the security engineering center (GSec) will allow Google to partner with the local community, including government, academia and students, and small and medium enterprises, to develop solutions for cybersecurity, privacy, safety, and AI problems, said Google VP of security engineering Heather Adkins in an interview with TechCrunch. Google has partnered with the Ministry of Home Affairs' Indian Cyber Crime Coordination Centre (I4C) to raise awareness of cybercrimes, the company said in a blog post. This builds upon the company's existing work, including the launch of its online fraud identification program, DigiKavach, which debuted in 2023 to restrict the harmful effects of malicious financial apps and predatory loan apps. With its GSec in India, Google will focus on three key areas, Adkins told TechCrunch: online scams and fraud, and how to keep people safe online; the cybersecurity of enterprises, government, and critical infrastructure; and building responsible AI. "These three areas will become part of our safety charter for India, and over the coming years... 
we want to use the fact that we have engineering capability here to solve for what's happening in India, close to where the users are," said Adkins. Globally, Google is utilizing AI to combat online scams and remove millions of ads and ad accounts. The company aims to deploy AI more extensively in India to combat digital fraud. Google Messages, which comes preinstalled on many Android devices, uses AI-powered Scam Detection that has helped protect users from over 500 million suspicious messages a month. Similarly, Google piloted its Play Protect in India last year, which it claims has blocked nearly 60 million attempts to install high-risk apps, stopping more than 220,000 unique apps on over 13 million devices. Google Pay, which is one of the top UPI-based payment apps in the country, also displayed 41 million warnings against transactions suspected to be scams. -- Adkins, a founding member of Google's security team who has been part of the internet company for over 23 years, discussed several other topics during an interview with TechCrunch: Adkins said one thing top of mind is the use and misuse of AI by malicious actors. "We're obviously tracking AI very closely, and up until now, we've mostly seen the large language models like Gemini used as productivity enhancements. For example, to make phishing scams a bit more effective -- especially if the actor and the target have different languages -- they can use the benefit of translation to make the scams more believable using deepfakes, images, video, etc.," said Adkins. Adkins said Google is conducting extensive testing of its AI models to ensure they understand what they should not do. "This is important for generated content that might be harmful, but also actions that it might take," said Adkins. Google is working on frameworks, including the Secure AI Framework, to restrict the abuse of its Gemini models. 
However, to protect generative AI from misuse and abuse by hackers in the future, the company sees the need for a framework that builds safety into how multiple AI agents communicate. "The industry is moving very, very quickly [by] putting protocols out. It's almost like the early days of the internet, where everybody's releasing code in real time, and we're thinking about safety after the fact," said Adkins. Google does not want to rely solely on its own frameworks to limit abuse of generative AI by hackers. Instead, Adkins said the company is working with the research community and developers. "One of the things you don't want to do is constrain yourself too much in the early research days," said Adkins. Alongside generative AI's potential for abuse by hackers, Adkins sees commercial surveillance vendors as a significant threat. These can include spyware makers, such as NSO Group, which is infamous for its Pegasus spyware, or other small enterprises selling surveillance tools. "These are companies spun up all over the world, and they develop and make and sell a platform for hacking," said Adkins. "You might pay $20, you might pay $200,000, just depending on the sophistication of the platform, and it allows you to scale attacking people without any expertise on your own." Some of these vendors also sell their tools to spy on people in markets including India. However, apart from being targeted by surveillance tools, the country has its own unique challenges, in part because of its size. The country sees not only AI-led deepfake and voice cloning frauds, but also instances of digital arrests, which Adkins underlines are just regular scams adapted for the digital world. "You can see how quickly the threat actors themselves are advancing... I love studying cyber in this region because of that. It's often a hint of what we're going to see worldwide at some point," said Adkins. 
Google has long encouraged its users to adopt more secure authentication methods beyond passwords to protect their online presence. The company has previously switched on multi-factor authentication (MFA) for all user accounts, and also promotes hardware-based security keys, which Adkins illustrated by pointing to Google employees actively using them with their laptops. Passwordless is also becoming a popular tech term, with various meanings. Nonetheless, expecting people to abandon passwords in a market like India is hard due to its vast demographics and diverse economic landscape. "We knew for a very long time that passwords were not secure. This concept of a multi-factor authentication was a step forward," said Adkins, adding that Indians likely favor SMS-based authentication over other MFA options.
[2]
Google Opens Cybersecurity Hub in Hyderabad to Strengthen India's Digital Safety Infra | AIM
The company also released a 'Safety Charter', aiming to enhance cybersecurity for its users. Google has launched its first Google Safety Engineering Centre (GSEC) in the Asia-Pacific region in Hyderabad to aid India's digital safety infrastructure, making it only the fourth such centre globally. The facility was inaugurated on Wednesday by Telangana CM Revanth Reddy, alongside state IT minister D Sridhar Babu and other top government officials. The launch follows Google's 'Safety Charter' for India's AI-led transformation, which was unveiled at the Safer with Google Summit in Delhi on June 17. The GSEC will focus on three core areas: protecting users from online fraud, strengthening enterprise and government cybersecurity, and building responsible AI solutions. It is also set to serve as a regional hub for APAC to combat digital threats. Using AI and LLMs, the centre aims to deploy real-time scam alerts via Gemini Nano on Android, improve fraud detection across services like Pay, Search and Gmail, and boost defences like Google Play Protect. It will also tackle AI misuse through adversarial testing, red teaming and watermarking tools like SynthID. Calling it a proud moment for Telangana, Reddy praised Google's ethical philosophy and said, "This centre will create jobs, foster skills and boost India's cyber defence. Telangana is poised to become a trillion-dollar economy by 2035." With over a billion internet users, India's digital growth comes with rising vulnerability. According to Heather Adkins, founding member of the Google Security Team, Google Pay alone prevented ₹13,000 crore worth of financial fraud in 2023. Yet, the threat looms large, with estimated cybercrime losses in India projected to hit ₹20,000 crore in 2025. The government is also on high alert. The Digital Threat Report 2024 noted a 175% rise in phishing attacks on banking and financial services, while over half of business email compromise cases now involve AI-generated deepfakes. 
The CERT-In cybersecurity agency has responded with national cyber drills and a cyber crisis management plan, having tackled over 14 lakh incidents in 2022 alone. Wilson White, Google's VP for public policy, highlighted that Asia-Pacific is now the epicentre of digital scams, accounting for two-thirds of global fraud losses -- $688 billion in 2023. "AI can help detect 20 times more scam pages and eliminate millions of fake listings," he said.
[3]
What's inside Google's plan to fight scams in India with AI?
Google has introduced its Safety Charter in India to expand AI-driven fraud detection and combat scams. This initiative targets India, Google's largest market outside the U.S., where digital fraud is increasing. The Indian government's data indicates that fraud related to the Unified Payments Interface (UPI) rose 85% year-over-year, reaching nearly 11 billion Indian rupees ($127 million) last year. The rise of digital arrest scams, where fraudsters pose as officials to extort money, also highlights the problem. To address these issues, Google has launched a security engineering center in India (GSec), joining existing centers in Dublin, Munich, and Malaga. Announced at the Google for India summit last year, GSec aims to collaborate with local entities, including government, academia, SMEs, and students, to develop solutions for cybersecurity, privacy, safety, and AI challenges. This was stated by Google VP of security engineering Heather Adkins in an interview with TechCrunch. Google stated in a blog post that it has partnered with the Ministry of Home Affairs' Indian Cyber Crime Coordination Centre (I4C) to increase cybercrime awareness. This partnership builds on existing programs such as DigiKavach, launched in 2023 to mitigate harmful effects from malicious financial and predatory loan apps. According to Adkins, GSec in India will focus on online scams and fraud, enterprise and governmental cybersecurity, and responsible AI development. Google aims to utilize its engineering capabilities in India to address specific local challenges. Adkins stated, "These three areas will become part of our safety charter for India, and over the coming years... we want to use the fact that we have engineering capability here to solve for what's happening in India, close to where the users are." Globally, Google uses AI to combat online scams, removing millions of ads and ad accounts. The company intends to expand AI deployment in India to further combat digital fraud. 
Google Messages uses AI-powered Scam Detection, protecting users from over 500 million suspicious messages monthly. Last year, Google piloted Play Protect in India, which blocked nearly 60 million attempts to install high-risk apps, stopping more than 220,000 unique apps on over 13 million devices. Google Pay displayed 41 million warnings against potentially fraudulent transactions. Adkins, a founding member of Google's security team, discussed the misuse of AI tools, noting the potential for malicious actors to leverage AI. She noted the risk of threat actors adapting deepfakes, images, and translation utilities to refine phishing scams. Adkins said, "We're obviously tracking AI very closely, and up until now, we've mostly seen the large language models like Gemini used as productivity enhancements. For example, to make phishing scams a bit more effective -- especially if the actor and the target have different languages -- they can use the benefit of translation to make the scams more believable using deepfakes, images, video, etc." Google is conducting extensive AI model testing to ensure safe functionality and is developing frameworks like the Secure AI Framework to prevent misuse of Gemini models, according to Adkins. She stated, "This is important for generated content that might be harmful, but also actions that it might take." Adkins expressed concern about the rapid pace of protocol deployment within the industry and emphasized the need for safety considerations to be integrated early in the development process. Adkins stated, "The industry is moving very, very quickly [by] putting protocols out. It's almost like the early days of the internet, where everybody's releasing code in real time, and we're thinking about safety after the fact." Google is collaborating with the research community and developers rather than imposing its own frameworks to limit potential abuses of generative AI, according to Adkins. 
She said, "One of the things you don't want to do is constrain yourself too much in the early research days." In addition to the risks posed by generative AI, Adkins identified commercial surveillance vendors as a major threat, including spyware developers. She stated, "These are companies spun up all over the world, and they develop and make and sell a platform for hacking. You might pay $20, you might pay $200,000, just depending on the sophistication of the platform, and it allows you to scale attacking people without any expertise on your own." Adkins said that India faces unique challenges due to its size, including AI-driven fraud and digital arrest scams. She also stated, "You can see how quickly the threat actors themselves are advancing... I love studying cyber in this region because of that. It's often a hint of what we're going to see worldwide at some point." Regarding authentication, Adkins acknowledged the difficulty of transitioning away from passwords in a diverse market like India, despite the known security vulnerabilities of passwords. Adkins stated, "We knew for a very long time that passwords were not secure. This concept of a multi-factor authentication was a step forward," adding that Indians likely favor SMS-based authentication.
[4]
Here's How Google Uses AI to Protect Indian Users from Online Frauds, Scams
Google is collaborating with IIT Madras on Post-Quantum Cryptography. Google unveiled its Safety Charter for India, highlighting how it is using artificial intelligence (AI) technology to identify and prevent instances of cybercrime across its products. The Mountain View-based tech giant highlighted that with the rise of India's digital economy, the need for trust-based systems was high. The company is now using AI in its products and country-wide programmes, and to detect and remove vulnerabilities in enterprise software. Alongside this, Google also highlighted the need to build AI responsibly. In a blog post, the tech giant detailed its achievements in the successful identification and prevention of online fraud and scams across its consumer products, as well as enterprise software. Explaining the focus on cybersecurity, Google cited a report highlighting that UPI-related frauds cost Indian users more than Rs. 1,087 crore in 2024, and that total financial losses from unchecked cybercrime are reportedly projected to reach Rs. 20,000 crore in 2025. Google also mentioned that bad actors are rapidly adopting AI to enhance cybercrime techniques. Some of these include AI-generated content, deepfakes, and voice cloning to pull off convincing frauds and scams. The company is combining its policies and suite of security technologies with India's DigiKavach programme to better protect the country's digital landscape. Google has also partnered with the Indian Cyber Crime Coordination Centre (I4C) to "strengthen its efforts towards user awareness on cybercrimes, over the next couple of months in a phased approach." Coming to the company's achievements in this space, the tech giant said it removed 247 million ads and suspended 2.9 million fraudulent accounts that were violating its policies, which also include compliance with state- and country-specific regulations. 
In Google Search, the company claimed to be using AI models to catch 20 times more scammy web pages before they appear on the results page. The platform is also said to have reduced instances of fraudulent websites impersonating customer service and governments by more than 80 percent and 70 percent, respectively. Google Messages recently adopted the new AI-powered Scam Detection feature. The company claims the security tool is flagging more than 500 million suspicious messages every month. The feature also warns users when they open URLs sent by senders whose contact details are not saved. The warning message is said to have been shown more than 2.5 billion times. The company's app marketplace for Android, Google Play, is claimed to have blocked nearly six crore attempts to install high-risk apps. This included more than 220,000 unique apps that were being installed on more than 13 million devices. Its UPI app, Google Pay, also displayed 41 million warnings after the system flagged transactions as potential scams. Google is also working towards securing its enterprise-focused products from potential cybersecurity threats. The company's Project Zero team is collaborating with DeepMind to discover previously unknown vulnerabilities in popular enterprise software such as SQLite. In the SQLite case, an AI agent detected the flaw. The company is also collaborating with IIT Madras to research Post-Quantum Cryptography (PQC). It refers to cryptographic algorithms that are designed to secure systems from potential threats posed by quantum computers. These algorithms are used for encryption, digital signatures, and key exchanges. Finally, on the responsible AI front, Google claimed that its models and infrastructure are thoroughly tested against adversarial attacks via both internal systems as well as AI-assisted red teaming efforts. 
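To make the URL-warning idea above concrete, a messaging app can combine several cheap signals before showing a warning. The following is a toy heuristic for illustration only, not Google's detection model, and the TLD list is invented:

```python
import re
from urllib.parse import urlparse

# Illustrative low-reputation TLD list; real systems use live reputation data.
SUSPICIOUS_TLDS = {"zip", "top", "xyz"}

def link_warnings(url: str, sender_known: bool) -> list[str]:
    """Return human-readable reasons a messaging app might warn about a URL."""
    reasons = []
    netloc = urlparse(url).netloc
    host = urlparse(url).hostname or ""
    if not sender_known:
        # Mirrors the "unknown sender" signal described in the article.
        reasons.append("link sent by an unknown sender")
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        reasons.append("raw IP address instead of a domain name")
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:
        reasons.append("low-reputation top-level domain")
    if "@" in netloc:
        # e.g. http://google.com@evil.example/ resolves to evil.example.
        reasons.append("'@' in the URL authority, which can disguise the real host")
    return reasons
```

A production system would feed signals like these, plus message content, into an on-device model rather than hard-coded rules, but the shape of the decision is the same: accumulate reasons, then warn if any fire.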
To label AI-generated content, the tech giant is using SynthID to embed an invisible watermark in text, audio, video, and images generated by its models. Google also requires YouTube content creators to disclose AI-generated content. Additionally, the double-check feature in Gemini allows users to have the chatbot verify its responses for inaccuracies by running a Google Search.
[5]
India leads many nations in tackling cyber threats, says top Google executive
Google Security's Heather Adkins highlights India's proactive approach to cybersecurity, surpassing many nations in building cyber resilience through government partnership and a skilled workforce. Google Security is establishing an engineering center in India to leverage local talent and enhance cyber safety measures. India is emerging as a global front-runner in cybersecurity, outpacing many countries in its proactive efforts to address digital threats, according to Heather Adkins, global VP of engineering for Google Security. Speaking on the sidelines of the Safer with Google India Summit, Adkins stressed that India's government has taken a lead in building cyber resilience. "They've done a very good job in getting involved quickly and partnering with companies. The workforce here and education levels in India are pretty high. There are parts of the world I go where they're just now starting to think about cyber security and they're much further behind India," she said. Adkins, who has spent over 20 years at Google, pointed to India's readiness to tackle evolving cyber threats even as geopolitical tensions risk escalating state-sponsored attacks globally. "It's a hot topic," she said, referring to the Indian government's engagement on cyber safety. To strengthen its role in this ecosystem, Google Security is setting up an engineering centre in India, leveraging the country's skilled talent pool to enhance cyber safety measures. Adkins underlined the dual nature of generative AI (Gen AI) in cybersecurity. While it enables cybercriminals to increase the speed and complexity of attacks, it also arms defenders with tools to outpace adversaries. "There's no doubt that we're seeing an increase in tempo and sophistication of attacks... But I also feel that today, more than ever before, enterprises have better tools," she said. "If I think about starting a company 23 years ago, cybersecurity looked primitive then. 
Today, most of the solutions you're going to buy have security built into them. So, you're in a much better place than you were, say, 20-30 years ago." She added that Gen AI will give cybersecurity teams an edge. "We will be able to leverage Gen AI to protect infrastructure in new ways that we've never thought of before and also at a speed that we've never been able to achieve before." Adkins also flagged the growing threat of state-backed cyber operations. "It's a question of who has more time. And, if you think about a well-funded nation state, maybe they'll create a project, put 100 people on it, and they just work on that project throughout the day... So, they often know more because they have more time, not because they're smarter. I would say they're more likely to be successful." While tech companies are developing robust defences, Adkins said user awareness remains equally critical. "Unlike the physical world where you have instincts and senses to identify something dangerous, the online world does not have a parallel. We have to build that," she said, emphasising the need to educate users in identifying malicious and fraudulent content.
[6]
Google releases 'safety charter' for India, senior exec details top cyber threat actors in the country
India has a unique lens on how technology is being used today, given the scale of the country, the speed at which people are coming online, and the vibrancy of the business community, said Heather Adkins, VP security engineering, Google. The way threat actors respond to this ecosystem also provides a useful view of the evolving threat landscape, she said, adding that patterns seen in India may be translated to other parts of the world. Google on Tuesday released a "safety charter" for India to address online scams and fraud, cybersecurity for government and businesses, and responsible artificial intelligence. The company is looking to deepen partnerships with the government, local organisations, and academia in these areas, said Adkins. Initiatives under the safety charter will be executed through the Google Security Engineering Centre being set up in a hub-and-spoke model across Delhi, Hyderabad and Bengaluru, she told ET. Local engagements help to understand patterns better and protect people globally, Adkins said. "What we might learn about a pattern in India will then be automatically translated to a user somewhere else in the world, which is very beneficial for us," she said, "and because of India's scale, you have so many people online, that gives us a lens that's very unique in the world in terms of what we can see." Fraudulent loan apps and "digital arrest" scams, for instance, were seen emerging in the country. 
On digital privacy laws emerging around the world, including India's Digital Personal Data Protection Act, Adkins said the company advocates for standardisation and a principles-based approach to enable a seamless experience as these laws are adapted across countries, while factoring in local needs and innovation. "Regulation works well when it addresses the problem and gets it solved, and so what we don't want to see is regulation that makes the other problems worse," she said. On the question of heightened threats during conflicts like the recent India-Pakistan tensions, Adkins said cybersecurity is now a bigger factor in conflicts as well as natural disasters across the world, with scammers trying to trick people via, say, donation links. "It's really easy for the scammers to pick up on current events and then use that to trick well-intentioned people out of money, out of personal information, into installing apps that are dangerous," she said. Threat actors are also using Gen AI for greater productivity, language translation, and research, and the company is "very concerned" about how the technology can make attacks easier, said Adkins. Sharing information and signals about these trends among partners will help tackle the problem, she said, adding that AI is also key in identifying fraud emails or removing malicious apps. Google is set to collaborate further with the ministry of home affairs, partnering with the Indian Cyber Crime Coordination Centre (I4C) for user awareness on cybercrimes over the next two months.
[7]
'Blueprint for online safety': Google launches 'safety charter' to save Indians from online frauds
'Blueprint for online safety': Google launches 'safety charter' to save Indians from online fraudsGoogle on Tuesday announced a major initiative to make India's digital space safer by unveiling its new 'Safety Charter' during the 'Safer with Google India Summit'. The initiative is focused on protecting users from online fraud, boosting cybersecurity for critical infrastructure and ensuring that AI is developed responsibly. This Safety Charter is built around three key goals -- keeping internet users safe from scams and frauds, strengthening cybersecurity for governments and businesses and building responsible AI systems that protect people.
[8]
Google Unveils Safety Charter to Safeguard AI-Led Digital Transformation in India
Google commits USD 20 million+ to cybersecurity capacity building in India. Google on Monday unveiled its "Safety Charter for India's AI-led Transformation," outlining a framework to bolster trust and security in the country's evolving digital ecosystem. The charter underscores Google's commitment to safeguarding users, enterprises, and public institutions amid the growing integration of artificial intelligence (AI) across services and infrastructure. Google noted that while India's digital economy has benefited from increased access, affordable devices, and widespread adoption of digital services, the foundation of this growth -- user trust -- must be actively protected. With AI enabling both innovation and increasingly sophisticated cyber threats, Google emphasized the importance of responsible technology stewardship. "The digital sphere can be an engine of growth only as long as the citizenry that use it feel safe in doing so. As AI proliferates deeper into our digital lives, it's crucial that we, as responsible stewards of India's tech industry, ensure that the trust reposed by users in India's digital ecosystem remains unchanged," said a Google blog post authored by Heather Adkins, VP, Security Engineering, and Preeti Lobana, VP and Country Manager, India, Google. Google said its safety charter is built on the framework of user safety, cybersecurity and responsible AI. "Under this charter, we shared how AI is eliminating vulnerabilities in enterprise software, how Google's investment in its products and programs are keeping users safe, and the various ways in which AI is closing the gap between attackers and defenders. Think of it as our blueprint for tackling the online world's new challenges, in collaboration with the wider ecosystem, under three key themes," the blog post read. 
A key focus of the Safety Charter is protecting end users from online fraud and scams, which have grown more complex through the use of AI-generated content, deepfakes, and voice cloning. Financial fraud, particularly related to UPI, cost Indians over Rs 1,087 crore in 2024, with industry estimates warning of potential losses of up to Rs 20,000 crore in 2025 if left unaddressed. Google's response includes scaling its DigiKavach initiative, which combines in-product safeguards with user awareness campaigns, reaching over 177 million people. Building on this, Google said it has partnered with the Indian Cyber Crime Coordination Centre (I4C) under the Ministry of Home Affairs to expand cybercrime awareness efforts across the country over the next couple of months in a phased approach. The company also reported the removal of over 247 million ads and the suspension of 2.9 million accounts in 2024 for policy violations, including those linked to financial fraud. Enhancements in Google Search now enable the detection of 20 times more scam-related pages using AI, including LLMs, while globally, targeted protections have reduced impersonation attacks on customer service and government portals by 80 percent and 70 percent, respectively. On Android, Google said Google Messages blocks over 500 million suspicious messages monthly using AI-powered Scam Detection, and users receive more than 2.5 billion warnings against suspicious links from unknown senders. "This smart detection happens all on devices to keep conversations private." "Globally, Google Play Protect scans over 100 billion installed apps daily for malware across billions of devices," the blog post said. 
Since the launch of the Play Protect pilot in India in late 2024, nearly 60 million attempts to install high-risk apps have been prevented, including more than 220,000 unique apps stopped on over 13 million devices. On Google Pay, over 41 million scam-related transaction warnings were issued last year, safeguarding Indian users. In strengthening cybersecurity for public and enterprise infrastructure, Google reiterated its secure-by-design and secure-by-default approach. The company's Mandiant-led threat intelligence and Google Cloud's M-Trends report continue to inform organizations about emerging threats, such as ransomware and cloud breaches. Additionally, a collaboration with DeepMind led to the first known case of an AI agent discovering a real-world memory-safety vulnerability in the SQLite database. Through Google.org, the company said it has committed an additional USD 5 million to The Asia Foundation (in addition to an earlier funding of USD 15 million), expanding the APAC Cybersecurity Fund to support MSMEs and students across India, including new cybersecurity clinics in partnership with Indian universities. Google is also working with IIT Madras on Post-Quantum Cryptography (PQC) research. On responsible AI development, Google highlighted its values-driven approach, grounded in internal risk taxonomies and its AI Principles. The company says it employs rigorous testing, including AI-assisted red teaming, to identify and mitigate risks in its models. "We are investing in tools to help identify AI-generated content. Our SynthID technology embeds an imperceptible, digital watermark directly into AI-generated content, with over 10 billion pieces of content already watermarked. We also require creators to disclose AI-generated content on YouTube and label synthetic images in Google Search," the Google team said in a blog post on June 17, 2025. 
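SynthID itself is proprietary, but the general idea behind statistical text watermarking can be shown with a toy "green-list" scheme, in the spirit of published academic work rather than SynthID's actual algorithm: each token's predecessor seeds a pseudorandom split of the vocabulary, a watermarking generator favours the "green" half, and a detector measures how often that bias shows up.

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically split the vocabulary based on the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    shuffled = sorted(vocab)           # stable base order before shuffling
    random.Random(seed).shuffle(shuffled)
    return set(shuffled[: int(len(vocab) * fraction)])

def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from their predecessor's green list.

    Unwatermarked text should hover near `fraction` (0.5 here); text from a
    generator that prefers green tokens scores noticeably higher, which is
    the statistical signal a detector tests for.
    """
    if len(tokens) < 2:
        return 0.0
    hits = sum(t in green_set(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)
```

The key property, which this toy shares with real schemes, is that detection needs only the secret seeding rule, not the original model or prompt, so watermarked text can be checked long after it was generated.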
In the Indian context, Google is refining AI models through initiatives like Gemini Language Testing and IndicGenBench to ensure accuracy and usability across 29 Indic languages. The Safety Charter concludes with a call for collective action, urging cross-sector collaboration among government bodies, industry, and civil society. Google cited partnerships with entities such as the Department of Telecom, the Ministry of Home Affairs, SEBI, and global initiatives like the Global Signals Exchange and Partnering for Protection as vital to strengthening systemic resilience. "Safety is a shared responsibility," the company said, adding that Google's Safety Charter for India's AI-led transformation is its attempt to share how the company is leveraging AI's potential to secure the foundation of India's digital economy. Swift expansion of India's digital economy has cemented the country's status as a "powerhouse", Google India's Country Manager and Vice President Preeti Lobana has said in an interview to PTI, adding that the US tech firm is "deeply committed to having a rich, thriving digital ecosystem when it comes to Android or Play Store." New technologies, like artificial intelligence, are unleashing creative potential, but at the same time, bringing to the fore challenges around deepfakes, she acknowledged. "And therefore, our efforts to make sure that whatever content is created using our AI, there are watermarks on that, and then enabling or sharing tools through which a wider section of users can upload some of this content to be able to identify the 'synthID'," Lobana said, according to the report. 
Google sees India as a critical market, Lobana was quoted as saying, adding that with the country on track to reach a USD 1-trillion digital economy in the near future, the tech giant hopes to leverage its expertise in advertising, cloud technology, and advanced AI to contribute to the nation's economic growth. "So, if you look at the digital landscape in India, (it is) so unique... think about the adoption. Who would have imagined a few years ago that UPI and therefore, consequently, you know, payment systems like Google Pay would be so widely adopted," she reportedly said. "As you know, we are looking at a 1 trillion digital economy in the next few years... so how do we bring the best of Google ads, our cloud and cutting-edge AI to power India's economic momentum," she said, outlining Google's priorities for the Indian market.

As per Google, the charter is a blueprint for tackling the online world's new challenges collaboratively with the wider ecosystem. According to the report, Lobana further noted that India was the launchpad for YouTube Shorts and GPay, showcasing the key role the market plays in Google's global strategy. "So if you think about the short form video 'shorts' came on YouTube first to India... Look at the scale of Google Pay in India. So that is where we launched the product, taking the learnings overseas. So India is a very critical market for us, and we have made investments in putting a lot of large teams here," she reportedly said.

India has a unique lens on how technology is being used today, given the scale of the country, the speed at which people are coming online, and the vibrancy of the business community, said Heather Adkins, VP, Security Engineering, Google, according to an Economic Times report. Initiatives under the Safety Charter will be executed through the Google Security Engineering Centre, being set up in a hub-and-spoke model across Delhi, Hyderabad, and Bengaluru, she reportedly said.
"What we might learn about a pattern in India will then be automatically translated to a user somewhere else in the world, which is very beneficial for us," she reportedly said, "and because of India's scale, you have so many people online, that gives us a lens that's very unique in the world in terms of what we can see."
[9]
Google Launches Safety Charter To Protect Indians From Online Fraud, Deepfakes
Google has launched its Safety Charter in India, aiming to bolster its AI-led fraud detection efforts and combat the surge in digital scams across the country. The tech giant describes the Safety Charter as a blueprint for navigating the challenges of the AI era: it aims to prioritise end-user safety, provide cybersecurity for government and enterprise infrastructure, and build responsible AI. "Under this charter, we shared how AI is eliminating vulnerabilities in enterprise software, how Google's investment in its products and programs are keeping users safe, and the various ways in which AI is closing the gap between attackers and defenders," Google said in a blog post.

The development comes two months after the Alphabet-owned company said it removed 247.4 Mn advertisements and suspended nearly 2.9 Mn advertiser accounts in India last year for violating its advertising policies. Of the many violations taken into account, the top five were ads related to financial services, followed by trademark misuse, abuse of the ad network, personalised advertising practices, and gambling and games-related content.

To note, India recorded 13,384 cyber fraud cases involving a total of INR 107.21 Cr between April and December 2024, minister of state (MoS) for finance Pankaj Chaudhary informed the Lok Sabha in a written reply earlier in March. At the enterprise level, servers of major companies have been hacked to extract the personal information of users and customers. For instance, rental car marketplace Zoomcar informed the SEC last week that it suffered a cybersecurity incident in which a hacker accessed the personal information of close to 8.4 Mn users.
[10]
Google Unveils Safety Charter to Combat AI Cyberattacks in India
Google has released its Safety Charter for India, highlighting how artificial intelligence (AI) tools are helping to curb cyberattacks and scams, according to a blog post by the tech giant. It also noted how AI is eliminating vulnerabilities in company software, including the various ways in which the new technology is "closing the gap between attackers and defenders."

Under Content Responsibility, the US-based tech company highlighted how its SynthID technology helps identify AI-generated content on its platforms by embedding a digital watermark directly into the content. Ironically, YouTube, a Google-owned platform, recently started testing an AI-generated video overview feature, but the company was silent on the identification of these snippets. Additionally, the company shared figures on the scam web pages its Large Language Models (LLMs) have helped block, and said its AI-powered scam detection feature currently blocks about 500 million suspicious messages on Google Messages every month.

In the blog post, Google called the charter a "blueprint for tackling the online world's new challenges." On responsible AI, the company said it is investing in AI-powered tools like SynthID to make synthetic content easily identifiable; the tool has already watermarked over 10 billion pieces of content, the tech giant claimed. In August 2023, Google DeepMind launched SynthID, a tool for watermarking and identifying synthetic or AI-generated images. "This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification," the US-based tech company said. For text, the tool adds a hidden watermark, similar to a digital signature, that helps identify the source of the content without hindering the user experience.
When an AI model creates a piece of content, Google's SynthID embeds a unique pattern of code within it, designed to be invisible to users. To a lay reader, the text reads like any other piece of content. A user who wishes to verify the origin of the text can feed it into another tool that reads this embedded code, or watermark. While unveiling the Safety Charter for India, Google said it also requires creators to disclose AI-generated content on YouTube and labels synthetic images in Google Search. Its 'double-check' feature in Gemini helps users identify potentially inaccurate statements by quickly cross-referencing them with Google Search.

Although the tech giant is talking about responsible AI, a recent announcement partially contradicts this stance. In April 2025, YouTube started testing its AI-generated video overview tool, which creates a scrollable carousel of short video clips in a user's YouTube search results, letting users quickly browse AI-selected snippets the system deems most relevant rather than clicking into a full video. However, YouTube did not say whether these synthetic videos will also be watermarked, or whether it will disclose the origin of such AI-generated video snippets. The feature has also triggered a debate around related copyright issues. India's Copyright Act grants creators exclusive rights, and the fair-dealing exception covers only purposes such as research, review, private study, or news reporting; it may not extend to commercial discovery snippets like these AI-generated previews.

In the blog post, the tech giant also revealed the impact AI-powered tools have had in curbing cyber fraud and scams. According to the company, the DigiKavach campaign, launched in 2023, has reached 177 million users.
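The embed-and-detect flow described above can be illustrated with a toy sketch. To be clear, this is not SynthID, whose text watermarking works statistically on the model's token probabilities; it is a deliberately simplified stand-in that hides a tag in zero-width Unicode characters, purely to show how a mark can be invisible to readers yet recoverable by a separate detector tool:

```python
# Toy watermark sketch (NOT SynthID): hide a tag in zero-width characters.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner


def embed(text: str, tag: str) -> str:
    """Append an invisible bit pattern derived from `tag` to the text."""
    bits = "".join(f"{ord(c):08b}" for c in tag)
    mark = "".join(ZW0 if b == "0" else ZW1 for b in bits)
    return text + mark


def detect(text: str):
    """Recover the hidden tag, if present; returns None for unmarked text."""
    hidden = [c for c in text if c in (ZW0, ZW1)]
    if not hidden:
        return None
    bits = "".join("0" if c == ZW0 else "1" for c in hidden)
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))


marked = embed("Generated paragraph.", "ai")
# The visible text is unchanged; only a detector recovers the tag.
assert marked.replace(ZW0, "").replace(ZW1, "") == "Generated paragraph."
print(detect(marked))  # -> "ai"
```

A real scheme like SynthID must additionally survive paraphrasing, cropping, and re-encoding, which is why it is woven into the generation process itself rather than appended afterwards.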
On top of this, in 2024, Google removed 247 million ads and suspended 2.9 million accounts for violating Google Ads policies, which the company claims has "dramatically reduced fraudulent financial ads." The tech giant said that with the integration of "advanced AI, including LLMs," it now catches 20 times more "scammy pages." "Globally, targeted protections have slashed attacks like impersonating customer service or government sites by over 80% and 70% respectively," as per the blog post.

Globally, Google Play Protect, a tool that checks apps and devices for harmful behaviour, scans over 100 billion installed apps daily for malware, the tech company revealed. In India, since its Play Protect pilot rolled out in October 2024, the tool has blocked nearly 6 crore (60 million) attempts to install high-risk apps and stopped more than 220,000 unique apps on over 13 million devices. The company also displayed about 4.1 crore (41 million) warnings against transactions suspected to be potential scams. Similarly, on its email service, the company claims to block more than 99.9% of spam, phishing, and malware globally.

In April 2025, the Ministry of Electronics and Information Technology (MeitY) launched India's first Digital Threat Report, 2024. In it, MeitY pointed out that cybercriminals are increasingly using AI-powered tools to exploit vulnerabilities in the cybersecurity systems of financial firms. According to the report, the average cost of a data breach reached an all-time high of $4.88 million globally in 2024, up 10% from 2023, while in India it was $2.18 million. The report said that AI-driven cyberattacks will become one of the most "scalable and adaptable threats" by 2025, challenging traditional defences and requiring innovative countermeasures. Hence, collaboration between private players and the government is important for curbing cyberattacks and scams as bad actors adopt more sophisticated methods to lure vulnerable people.
Google's Safety Charter for India might help pave the way for a more secure internet for Indian citizens.
[11]
As AI grows, Google introduces Safety Charter in India to fight scams and boost cybersecurity
Google has unveiled its Safety Charter in India, aimed at securing the country's digital space as artificial intelligence (AI) becomes a bigger part of everyday life. The Safety Charter focuses on three main areas: protecting users from online scams, strengthening cybersecurity for the public and private sectors, and building AI responsibly. "As AI proliferates deeper into our digital lives, it's crucial that we, as responsible stewards of India's tech industry, ensure that the trust reposed by users in India's digital ecosystem remains unchanged," Google announced. "Towards this, we're excited to share Google's Safety Charter for India's AI-led transformation."

Online frauds in India are becoming more advanced, with scammers using tools like deepfake videos and AI-generated voice clones. Google says UPI-related frauds alone cost users over Rs 1,087 crore in 2024, and this could increase sharply if not tackled. To address this, Google is expanding its DigiKavach program, which detects scams, raises awareness, and protects users from fraud. So far, the initiative has reached 177 million people, helping them identify emerging fraud tactics. Building on this momentum, Google has now partnered with the Indian Cyber Crime Coordination Centre (I4C) to further boost user awareness.

The company claims that it removed 247 million scam ads and suspended 2.9 million accounts for violating Google Ads policies last year. On Google Messages, AI now helps block over 500 million scam texts every month. On Google Pay, 41 million scam warnings have been issued to Indian users. Meanwhile, Google Play Protect has stopped 6 crore high-risk app downloads since its India rollout in 2024. Google also revealed that its Project Zero team discovered a previously unknown, exploitable vulnerability in SQLite.
Additionally, the company is collaborating with IIT Madras on Post-Quantum Cryptography to improve online privacy and data protection for the future. "Through Google.org, we are providing US$5 million in support to The Asia Foundation (in addition to an earlier funding of $15 million) to expand the APAC Cybersecurity Fund's reach, enabling the pilot of more than 10 new cybersecurity clinics, including strategic partnerships with Indian universities to strengthen the cybersecurity capabilities of local MSMEs and students," the tech giant added. Google says it's embedding responsible AI practices across its products. This includes testing AI models thoroughly, using tools like SynthID to watermark AI-generated content and labeling AI-created images and videos. "Google's Safety Charter for India's AI-led transformation is our attempt to share how we are leveraging AI's incredible potential to secure the foundation of India's digital economy: trust," the company said.
Google unveils its Safety Charter and opens a new security engineering center in India, leveraging AI to enhance fraud detection and cybersecurity measures across its products and services.
Google has unveiled its Safety Charter in India, marking a significant expansion of its AI-led developments for fraud detection and combating scams across the country [1]. This initiative comes as India, Google's largest market outside the United States, faces rising digital fraud challenges. The Indian government reported an 85% year-over-year increase in fraud related to the Unified Payments Interface (UPI), amounting to nearly 11 billion Indian rupees ($127 million) last year [1][3].
Source: Economic Times
As part of its commitment to enhancing cybersecurity in India, Google has launched its first Google Safety Engineering Centre (GSEC) in the Asia-Pacific region, located in Hyderabad [2]. This center, the fourth of its kind globally after Dublin, Munich, and Malaga, aims to strengthen India's digital safety infrastructure [1][2].
The GSEC in India will concentrate on three key areas:

- protecting users from online scams and fraud;
- strengthening the cybersecurity of enterprises, government, and critical infrastructure;
- building responsible AI.

Google has partnered with the Ministry of Home Affairs' Indian Cyber Crime Coordination Centre (I4C) to raise awareness about cybercrimes [1]. The company is also collaborating with IIT Madras on research into Post-Quantum Cryptography (PQC) to secure systems against potential threats from quantum computers [4].
Google is leveraging AI across its products and services to enhance cybersecurity:

- AI-powered scam detection on Google Messages blocks over 500 million suspicious texts every month;
- Google Pay issued over 41 million scam-related transaction warnings to Indian users;
- Google Play Protect has blocked nearly 60 million attempts to install high-risk apps since its India pilot began in late 2024;
- Gmail blocks more than 99.9% of spam, phishing, and malware globally.
Source: NDTV Gadgets 360
Heather Adkins, Google VP of security engineering, highlighted the potential misuse of AI by malicious actors, including the use of large language models to enhance phishing scams and create more convincing deepfakes [1][3]. To counter these threats, Google is investing in AI-driven scam detection, user warnings, and content watermarking across its products.
According to Adkins, India is emerging as a global front-runner in cybersecurity, outpacing many countries in its proactive efforts to address digital threats [5]. The Indian government's engagement and partnership with companies, combined with the country's high education levels and skilled workforce, have contributed to this leadership position [5].
As the cybersecurity landscape evolves, Google's initiatives in India aim to leverage local talent and enhance cyber safety measures. The company's focus on AI-driven solutions and collaboration with local entities positions India as a key player in the global fight against digital fraud and cybercrimes.
[1]
[2] Analytics India Magazine | Google Opens Cybersecurity Hub in Hyderabad to Strengthen India's Digital Safety Infra | AIM
[4]