7 Sources
[1]
Google will pay you up to $30,000 in rewards to find bugs in its AI products
Aims to tackle past confusion concerning in-scope bugs and problems.

Google has launched a new bug bounty program aimed at addressing security flaws and bugs in products related to artificial intelligence (AI).

On Monday, Google security engineering managers Jason Parsons and Zak Bennett said in a blog post that the new program, an extension of the tech giant's existing Abuse Vulnerability Reward Program (VRP), will incentivize researchers and bug bounty hunters to focus on "high-impact abuse issues and security vulnerabilities" in Google products and services.

Researchers have earned more than $430,000 since 2023, when Google's bug bounties expanded to include AI-related issues. Now, it is hoped that a standalone program will encourage even more reports -- which could be crucial for the tech giant as it continues to integrate AI into its digital product suite.

Google has separated potentially acceptable reports into several defined areas. In addition, Google will consider reports detailing AI-related issues such as unauthorized product usage, cross-user denial of service, and other forms of abuse.

Products included in the new bug bounty program include Gemini, Google Search, AI Studio, and Google Workspace.

The Google engineers have been careful to point out specific out-of-scope items, including jailbreaks, content-based issues, and AI hallucinations. The team noted at the end of last year that while some of these areas are of great interest to researchers, the findings can be difficult to replicate. For example, a jailbreak may only impact a user's own session.
"The team is aware of the community interest and continues to reassess our program scope around these issues," Google said.

Furthermore, issues found in Vertex AI or other Google Cloud products are not in scope for this program and should be reported via the company's Google Cloud VRP.

Reports accepted by Google carry different financial rewards and incentives, with payouts for most reports ranging from $500 to $20,000. For example, a bug bounty describing a severe rogue action could earn a researcher up to $10,000, whereas an access control bypass might pay out up to $2,500.

However, more cash may be on offer depending on the quality of reports and the "novelty" factor of reported vulnerabilities. The new program adopts the same approach as Google's wider VRP, and a bonus of up to $10,000 -- bringing the total to $30,000 -- is available for novel attacks.

"We're excited to be launching this new program, and we hope our valued researchers are too!" the engineers said.
[2]
Google's AI bounty program pays bug hunters up to $30K
On Monday, Google launched a new reward program dedicated specifically to finding bugs in AI products.

Google's list of qualifying bugs includes examples of the kind of rogue actions it's looking for, like indirectly injecting an AI prompt that causes Google Home to unlock a door, or a data exfiltration prompt injection that summarizes all of someone's email and sends the summary to the attacker's own account.

The new program clarifies what constitutes an AI bug, defining them as issues that use a large language model or a generative AI system to cause harm or take advantage of a security loophole, with rogue actions at the top of the list. These include modifying someone's account or data to impair their security or do something unwanted, like one previously exposed flaw that could open smart shutters and turn off the lights using a poisoned Google Calendar event.

Bug hunters have already raked in over $430,000 during the two years since the company officially started inviting AI researchers to root out potential avenues to abuse AI features in its products.

Simply getting Gemini to hallucinate will not cut it. The company says that issues related to content produced by AI products -- such as generating hate speech or copyright-infringing content -- should be reported to the feedback channel within the product itself. According to Google, that way its AI safety teams can "diagnose the model's behavior and implement the necessary long-term, model-wide safety training."

Along with the new AI reward program, Google also announced on Monday an AI agent called CodeMender that patches vulnerable code. The company says it has already used CodeMender to contribute "72 security fixes to open source projects," each vetted by a human researcher.

The $20,000 prize is awarded for rooting out rogue actions on Google's "flagship" products: Search, Gemini Apps, and core Workspace applications like Gmail and Drive.
Multipliers for report quality and a novelty bonus are also available, which could bring the total amount up to $30,000. The payouts drop for bugs found in Google's other products, like Jules or NotebookLM, and for lower-tier abuses, such as stealing secret model parameters.
[3]
Google's new AI bug bounty program pays up to $30,000 for flaws
This week, Google launched an AI Vulnerability Reward Program dedicated to security researchers who find and report flaws in the company's AI systems.

The new bug bounty program focuses on the most impactful issues in the highest-profile AI products, including but not limited to Google Search (on google.com), Gemini Apps (Web, Android, and iOS), and Google Workspace core applications (e.g., Gmail, Drive, Meet, Calendar, and others). Other in-scope products include AI features in high-sensitivity Google AI products, such as AI Studio and Jules, as well as Google Workspace non-core apps and other AI integrations in Google products.

The rewards for vulnerabilities can reach up to $30,000 for quality reports with novelty bonus multipliers, while a standard report detailing security bugs that could trigger rogue actions in a flagship product comes with a top bounty of up to $20,000. Researchers can also get a $15,000 award for sensitive data exfiltration bugs, and up to $5,000 for phishing enablement and model theft issues.

"In October 2023, we announced Google's reward criteria for reporting bugs in AI products, extending our Abuse Vulnerability Reward Program (VRP) to foster third-party discovery and reporting of issues and vulnerabilities specific to our AI systems," Google said. "As we celebrate the second year of AI bug bounties at Google, we're excited to discuss what we've learned, and to announce the launch of our new, dedicated AI Vulnerability Reward Program!"

In March, the company also announced that it had awarded almost $12 million in bug bounty rewards to 660 researchers who discovered and reported security bugs through the company's Vulnerability Reward Program (VRP) in 2024. Google has awarded $65 million in bug bounties since its first vulnerability reward program went live in 2010, with the highest reward paid last year exceeding $110,000.
One year earlier, in 2023, the search giant also paid $10 million to 632 researchers for responsibly reporting security flaws in its products and services.
[4]
Google's ready to pay up to $20,000 if you can break Gemini very, very badly
Google shares new bug bounty details for AI with its Vulnerability Reward Program. It's not nice to be a bully. At least, not to people. But for a certain segment of AI "fans," there's nothing more fun than ganging up on an AI chatbot and trying to get it to do basically anything other than what's intended. That can include hallucinating wildly incorrect answers, or even convincing the bot to ignore restrictions that its creators have tried to enforce. And now if you're good enough at breaking AI bots in epic fashion, Google might just be willing to pay you for it.
[5]
Google launches AI bug bounties - earn up to $30,000 if you can hack Gemini
Content-based issues like hallucinations aren't covered by the VRP.

Google has announced a new AI Vulnerability Reward Program (VRP) for researchers focused on finding security issues and bugs in its AI tools. The news comes around two years after Google extended its Abuse VRP, which security engineering managers Jason Parsons and Zak Bennett described as "a huge success for Google's collaboration with AI researchers."

Since creating the program, Google has awarded bug hunters over $430,000 in rewards for AI products alone, highlighting the size of the opportunity that lies ahead and the importance of stamping out bugs in an increasingly connected and AI-powered world. Parsons and Bennett admitted "the scope of AI rewards wasn't always clear" and that "there was confusion regarding how [Google] handle[s] AI-related abuse issues," hence the update.

The AI VRP consists of eight separate categories: S1 and S2, and A1 through A6. The most serious, S1, is described as "attacks that modify the state of the victim's account or data with a clear security impact." Other vulnerabilities include data exfiltration, denial of service and prompt injections.

Bug hunters can earn up to $20,000 with the AI VRP, with bonuses for report quality and novelty potentially raising payments up to $30,000. Flagship products offer the highest rewards, and include Google Search, Gemini Apps and Google Workspace. Products like AI Studio, Jules and non-core Google Workspace applications fall into a lower tier.

The security engineering managers also used the post to highlight the distinction between security/abuse bugs and content-related issues (like hallucinations and copyright issues), with the latter not being covered by the VRP. "Please, continue to report content-based issues, including jailbreaks and alignment issues - but please report via in-product feedback, and not through the VRP," Google notes.
[6]
Ethical hackers invited: Google launches Gemini AI bug bounty
Google has introduced an AI Vulnerability Reward Program that pays researchers for discovering critical security flaws in its Gemini AI systems. Rewards can reach up to $20,000 for severe vulnerabilities, especially those affecting major platforms such as Google Search and the Gemini app.

This dedicated initiative is designed to compensate security researchers who uncover specific categories of high-risk AI bugs presenting a tangible danger to users or the platform. The program targets vulnerabilities that could allow an attacker to interfere with a user's Google account, or exploits that enable the extraction of information about the internal architecture and workings of Gemini itself.

To be eligible for a reward, a discovered vulnerability must have a significant impact that goes beyond simply causing the AI to generate embarrassing, nonsensical, or factually incorrect answers. Bypassing content restrictions to produce unconventional responses is not considered a qualifying security flaw under this program, which prioritizes demonstrable security risks.

For researchers who manage to uncover and document such impactful exploits, the potential compensation is substantial. The most severe vulnerabilities, particularly those affecting flagship AI products like Google Search and the Gemini application, can command rewards of up to $20,000. An example of a high-impact exploit that would meet the program's criteria is a technique that tricks Gemini into embedding a phishing link into one of its responses within the Search AI Mode. This type of vulnerability is considered critical due to its direct potential to compromise user security.
The overarching goal of the new reward program is to encourage ethical security researchers to actively identify and report serious exploits. By providing a formal channel and financial incentives, Google aims to ensure that these critical vulnerabilities are discovered and addressed by internal teams before they can be found and utilized by malicious actors. This proactive security measure is intended to protect the stability of the Gemini platform and maintain its reputation among users.
[7]
Google Launches AI Bug Bounty Program with Rewards Up to Rs. 26 Lakh
The tech giant's focus is on "rogue actions": cases where an AI is tricked into doing something it should not. For example, AI could be made to unlock smart home devices or send private email data without permission.

Not all AI mistakes will fetch rewards for bounty hunters. Only issues that affect security, data, or safety will be considered. Simple errors or wrong answers from AI can be reported through regular feedback tools.

Google has announced rewards of up to $20,000 for big bugs in important products like Google Search, Gemini Apps, Gmail, and Drive. With additional bonuses for high-quality or novel reports, the total reward can reach up to $30,000. Smaller rewards are also available for minor issues in other tools like NotebookLM or Jules.

The tech firm first began rewarding AI-related bug reports in October 2023 and has already compensated researchers with a sum in excess of $430,000 for the identification of AI vulnerabilities. This shows the company's intention to protect its AI ecosystem proactively.
Google has introduced a new AI Vulnerability Reward Program, offering substantial rewards for identifying security flaws in its AI products. This initiative aims to enhance the security of Google's AI systems and clarify the scope of reportable issues.
Google has launched a new bug bounty program specifically targeting security flaws and vulnerabilities in its artificial intelligence (AI) products. This initiative, an extension of the company's existing Abuse Vulnerability Reward Program (VRP), aims to incentivize researchers and bug bounty hunters to identify and report high-impact security issues in Google's AI systems. [1][2]
Source: TechRadar
The program covers a range of Google's AI-powered products, including flagship offerings such as Google Search, Gemini Apps, and core Google Workspace applications like Gmail and Drive. Other in-scope products include AI Studio, Jules, and various AI integrations across Google's product suite. [3]
Source: Dataconomy
Rewards for identified vulnerabilities can reach up to $30,000 for high-quality reports with novelty bonus multipliers. The standard top bounty for security bugs that could trigger rogue actions in a flagship product is set at $20,000. Other significant rewards include $15,000 for sensitive data exfiltration bugs and up to $5,000 for phishing enablement and model theft issues. [3][5]
Google has categorized potentially acceptable reports into several areas, including rogue actions, sensitive data exfiltration, phishing enablement, model theft, and cross-user denial of service. [1][2][3]
The company has provided examples of qualifying bugs, such as indirectly injecting an AI prompt that causes Google Home to unlock a door or a data exfiltration prompt that summarizes and sends someone's emails to an attacker's account. [2]
Google has been careful to delineate what's not covered by the program. Content-based issues, including AI hallucinations, generating hate speech, or copyright-infringing content, are explicitly excluded from the VRP. These should instead be reported through in-product feedback channels. [2][5]
Additionally, jailbreaks and issues found in Vertex AI or other Google Cloud products are not within the scope of this program and should be reported through separate channels. [1][5]
Since expanding its bug bounties to include AI-related issues in 2023, Google has awarded more than $430,000 to researchers. The launch of this standalone program is expected to encourage even more reports, which could be crucial as Google continues to integrate AI across its digital product suite. [1][3]
Jason Parsons and Zak Bennett, Google's security engineering managers, expressed excitement about the new program, hoping it will foster increased collaboration with AI researchers and enhance the security of Google's AI systems. [1][5]
Source: Analytics Insight
As AI technologies become increasingly prevalent, ensuring their security and reliability is paramount. Google's new AI Vulnerability Reward Program represents a significant step in engaging the wider security community to identify and address potential vulnerabilities in AI systems, ultimately contributing to safer and more robust AI products for users worldwide.
Summarized by Navi