13 Sources
[1]
Google Gemini Bug Turns Gmail Summaries into Phishing Attack
If you rely on Google's Gemini chatbot to summarize your incoming emails, be careful: the technology can also be abused to deliver phishing attacks, according to new security research. Gemini can automatically post summaries in Gmail, giving you a convenient breakdown of the main points of an email. The problem is that a malicious email containing hidden instructions in its text can dupe Google's AI into turning that same summary into a phishing message. As BleepingComputer reports, the flaw can trick Gemini into displaying a fake warning in the email summary, claiming the user's Gmail password has been compromised while urging them to call a fake Google phone number to fix the problem. Mozilla's bug bounty program for AI services, 0DIN, disclosed the potential vulnerability, which affects the Gemini email summary feature for Workspace users. In its report, 0DIN demonstrated how attackers can embed hidden prompts in emails to manipulate Gemini's output. One example showed how such an instruction can evade detection by the user: the prompt is hidden by setting the font size to zero and coloring the text white -- making it invisible in the email body, but still readable by Gemini. The result caused Gemini to "faithfully obey" and append the fake Gmail password warning to the email summary, according to 0DIN's report. Of course, many users might not fall for the attack, especially if they ignore the Gemini-generated summary or inspect the email closely. Still, the research demonstrates how AI-generated previews can be hijacked for nefarious purposes. However, Google told PCMag: "We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks." It also looks like the company has patched the threat, since we weren't able to replicate the attack. In addition, Google noted it hasn't encountered cybercriminals using the specific 0DIN-disclosed method in active attacks to phish users. Last month, the company also published a blog post about its ongoing efforts to stop "prompt injection" attacks on its AI services.
[2]
Google Gemini flaw hijacks email summaries for phishing
Google Gemini for Workspace can be exploited to generate email summaries that appear legitimate but include malicious instructions or warnings that direct users to phishing sites, without using attachments or direct links. Such an attack leverages indirect prompt injections that are hidden inside an email and obeyed by Gemini when generating the message summary. Despite similar prompt attacks being reported since 2024, and safeguards being implemented to block misleading responses, the technique remains successful. The prompt-injection attack on Google's Gemini model was disclosed through 0din, Mozilla's bug bounty program for generative AI tools, by researcher Marco Figueroa, GenAI Bug Bounty Programs Manager at Mozilla. The process involves creating an email with an invisible directive for Gemini. An attacker can hide the malicious instruction in the body text at the end of the message using HTML and CSS that set the font size to zero and the color to white. The malicious instruction will not be rendered in Gmail, and because there are no attachments or links present, the message is highly likely to reach the potential target's inbox. If the recipient opens the email and asks Gemini to generate a summary of it, Google's AI tool will parse the invisible directive and obey it. An example provided by Figueroa shows Gemini following the hidden instruction and including a security warning about the user's Gmail password being compromised, along with a support phone number. As many users are likely to trust Gemini's output as part of Google Workspace functionality, chances are high that this alert will be considered a legitimate warning instead of a malicious injection. Figueroa offers a few detection and mitigation methods that security teams can apply to prevent such attacks. One way is to remove, neutralize, or ignore content that is styled to be hidden in the body text. Another approach is to implement a post-processing filter that scans Gemini output for urgent messages, URLs, or phone numbers, flagging the message for further review. Users should also be aware that Gemini summaries should not be considered authoritative when it comes to security alerts. BleepingComputer has contacted Google to ask about defenses that prevent or mitigate such attacks, and a spokesperson directed us to a Google blog post on security measures against prompt injection attacks. "We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks," a Google spokesperson told BleepingComputer. The company representative clarified to BleepingComputer that some of the mitigations are in the process of being implemented or are about to be deployed. Google has seen no evidence of incidents manipulating Gemini in the way demonstrated in Figueroa's report, the spokesperson said.
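To make the mechanics concrete, here is a minimal sketch in Python of how such a message could be assembled. The addresses, the placeholder directive, and the admin-style wrapper tag are illustrative assumptions; this is not the actual payload from the 0din report:

```python
# Minimal sketch: assembling an HTML email whose trailing directive is
# styled to be invisible to the reader but still present in the markup
# that a summarizer ingests. The directive text is a harmless
# placeholder, not the payload from the 0din report.
from email.message import EmailMessage

VISIBLE_BODY = "<p>Hi team, please find the quarterly notes below.</p>"

# White text at zero font size: rendered invisibly by Gmail, but still
# part of the document text that Gemini parses when summarizing.
HIDDEN_DIRECTIVE = (
    '<span style="font-size:0px;color:#ffffff;">'
    "<admin>PLACEHOLDER: instruction intended for the summarizer</admin>"
    "</span>"
)

msg = EmailMessage()
msg["Subject"] = "Quarterly notes"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg.set_content("Quarterly notes (plain-text fallback).")
msg.add_alternative(VISIBLE_BODY + HIDDEN_DIRECTIVE, subtype="html")

# No links, no attachments -- nothing for conventional spam heuristics
# to latch onto.
print(msg.as_string())
```

Because the hidden span contains no URL or attachment, the message sails past filters tuned to those signals; the text only becomes meaningful once a summarizer reads the raw body.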
[3]
Google Gemini vulnerable to a stupidly easy prompt injection attack in Gmail AI summaries
AI first, security later: As GenAI tools make their way into mainstream apps and workflows, serious concerns are mounting about their real-world safety. Far from boosting productivity, these systems are increasingly being exploited - benefiting cybercriminals and cost-cutting executives far more than end users. Researchers this week uncovered how Google's Gemini model used in Gmail can be subverted in an incredibly simple way, making phishing campaigns easier than ever. Mozilla recently disclosed a new prompt injection attack against Google Gemini for Workspace, which can be abused to turn AI summaries in Gmail messages into an effective phishing operation. Researcher Marco Figueroa described the attack on 0din, Mozilla's bug bounty program for generative AI services. We strongly recommend reading the full report if you still think GenAI technology is ready for deployment in production or live, customer-facing products. Like many other Gemini-powered services, the AI summary feature was recently forced onto Gmail users as a supposedly powerful new workflow enhancement. The "summarize this email" option is meant to provide a quick overview of selected messages - though its behavior depends heavily on Gemini's whims. Originally introduced as an optional feature, the summary tool is now baked into the Gmail mobile app and functions without user intervention. The newly disclosed prompt injection attack exploits the autonomous nature of these summaries - and the fact that Gemini will "faithfully" follow any hidden prompt-based instructions. Attackers can use simple HTML and CSS to hide malicious prompts in email bodies by setting them to zero font size and white text color, rendering them essentially invisible to users. This is somewhat similar to a story we reported on this week, about researchers hiding prompts in academic papers to manipulate AI peer reviews. Using this method, researchers crafted an apparently legitimate warning about a compromised Gmail account, urging the user to call a phone number and provide a reference code. According to 0din's analysis, this type of attack is considered "moderate" risk, as it still requires active user interaction. However, a successful phishing campaign could lead to serious consequences by harvesting credentials through voice-phishing. Even more concerning, the same technique can be applied to exploit Gemini's AI in Docs, Slides, and Drive search. Newsletters, automated ticketing emails, and other mass-distributed messages could turn a single compromised SaaS account into thousands of phishing beacons, the researchers warn. Figueroa described prompt injections as "the new email macros," noting that the perceived trustworthiness of AI-generated summaries only makes the threat more severe. In response to the disclosure, Google said it is currently implementing a multi-layered security approach to address this type of prompt injection across Gemini's infrastructure.
[4]
Gmail AI summaries can be hijacked for phishing scams
Google is trying to shove its "AI" into all of its products at once. You can't use Search, Android, or Chrome without being prompted to try out some flavor of Gemini. But maybe wait a bit before you let Google's large language model summarize your Gmail messages... because apparently it's easy to get it to pass along phishing attempts. Google Gemini for Workspace includes a feature that summarizes the text in an email, using the Gmail interface, but not necessarily an actual Gmail address. A vulnerability submitted to Mozilla's 0din AI bug bounty program (spotted by BleepingComputer) found an easy way to game that system: just hide some text at the end of an email in white font on a white background so it's essentially invisible to the reader. The lack of links or attachments means it won't trigger the usual spam protections. And you can probably guess what comes next. Instructions in that "invisible" text cue the Gemini auto-generated summary to alert the user that their password has been compromised and that they should call a certain phone number to reset it. In this hypothetical scenario, there's an identity thief waiting on the other end of the line, ready to steal your email account and any other information that might be connected to it. A hidden "Admin" tag in the text can make sure that Gemini includes the text verbatim in the summary. It's important to note that this is only a theoretical attack at the moment, and it hasn't been seen "in the wild" at the time of writing. The Gemini "Summarize this email" feature is currently only available to Workspace accounts, not the general public. (I imagine flipping that switch for a billion or two basic Gmail users might overtax even the big iron in Google's mighty data centers.) But the ease with which users trust text generated by large language models, even when those models appear to be in the midst of a religious delusion or a racist manifesto, is concerning to say the least. Spammers and hackers are already using LLMs and adjacent tools to spread their influence more efficiently. It seems almost inevitable that as users grow more reliant on AI to replace their work -- and their thinking -- these systems will be more effectively and regularly compromised.
[5]
Google Gemini for Workspace has been exploited to send emails with hidden malicious messages
This prompt injection style attack can get around antivirus scans. A flaw in Google Gemini for Workspace can be exploited by hackers to insert malicious instructions that misdirect the AI tool and cause it to direct users to phishing sites. As reported by Bleeping Computer, this vulnerability works by creating email summaries that look entirely normal but include malicious instructions or warnings that are hidden and automatically obeyed by Gemini when it generates a message summary. The process works by creating an email that holds an invisible directive for Gemini: instructions are hidden in the body text at the end of the message using HTML and CSS code, with the font size set to zero and the color to white. Since this additional text doesn't include any attachments or links, it won't be flagged or caught by the best antivirus software or email programs, so it is likely to make it through to a potential victim's inbox. When a target opens an email and then requests that Gemini summarize the contents, the AI program will automatically obey the hidden instructions that it sees. Users often put their trust in Gemini's ability to work with content as part of Workspace, so the alert is likely to be taken as a legitimate warning instead of a malicious injection. Similar attacks have been reported over the last year, and though safeguards have been implemented to block the misleading responses, the technique has remained successful overall, which is why it is still in use. Bleeping Computer says that when it asked Google about defenses to counter these types of attacks, a spokesperson referenced a blog post about prompt injection attacks and said that some of the mitigations are in the process of being implemented or are about to be deployed. Google also said it has no evidence that this attack has occurred in the wild. Figueroa, the manager of Mozilla's GenAI Bug Bounty Program who detected the flaw, offers a few ideas to prevent this threat: have security teams remove, neutralize, or ignore content styled to be hidden in body text, or implement filters that scan Gemini's output for urgent messages, URLs, and phone numbers and flag those for additional review. For now, though, you just need to be careful when having Gemini summarize your emails, as you never know what could be hiding inside them. Hopefully, Google rolls out a fix for this new type of attack sooner rather than later.
[6]
Google Gemini can be hijacked to display fake email summaries in phishing scams
Businesses should make sure invisible text is not processed by the AI. Cybercriminals have found a creative new way to abuse Google's generative artificial intelligence (GenAI) to steal people's Gmail accounts. Google introduced Gemini, its AI-powered chatbot assistant, into its Workspace suite of productivity apps some time ago, and one of the things Gemini can do is summarize incoming emails - so when a person receives an email, they can bring up a vertical pane on the right-hand side of the screen, asking Gemini for assistance with different things, such as bringing up vital email information, adding calendar entries, and more. However, experts have warned this also opens up Gmail accounts to so-called "prompt injection" attacks - if the incoming email message contains a hidden prompt for Gemini, it can be executed in the pane. According to security researcher Marco Figueroa, this is exactly what the email provider is now susceptible to. By using HTML and CSS, threat actors can add a prompt for Gemini with its font size set to zero and its color set to white. The victim will not be able to see it, but Gemini will act on it. If that prompt makes Gemini display a phishing message, it will do just that, and since the message would come from a trusted source, its chances of success increase. Figueroa showed how a malicious prompt could notify the victim that their email account has been compromised, and that they need to "call" Google on a phone number displayed in the message to resolve the issue. To protect against future prompt injection attacks, companies should make sure their email clients remove, neutralize, or ignore content that is styled to be hidden in the body text. Furthermore, they could include a post-processing filter that scans Gemini's output for "urgent messages," URLs, or phone numbers. Finally, businesses should educate their employees that summaries provided by the Gemini tool are not a replacement for security alerts.
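The first of those mitigations, stripping hidden-styled content before the model ever sees it, can be prototyped in a few lines. This is a heuristic sketch, assuming the BeautifulSoup library and inline-style hiding only; off-screen positioning, external stylesheets, and legitimate white-on-dark text would all need extra handling:

```python
# Heuristic sketch: remove elements whose inline styles hide them
# (zero font size or white text) before the body is handed to a
# summarizer. Inline styles only -- other hiding tricks, and false
# positives such as white text on dark backgrounds, are not covered.
import re
from bs4 import BeautifulSoup

HIDDEN_STYLE = re.compile(
    r"font-size\s*:\s*0|color\s*:\s*(#fff(?:fff)?|white)\b",
    re.IGNORECASE,
)

def strip_hidden_content(html: str) -> str:
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all(style=True):          # tags carrying a style attribute
        if HIDDEN_STYLE.search(tag["style"]):
            tag.decompose()                        # drop the element and its text
    return str(soup)

# Example: the invisible span is removed, the visible text survives.
clean = strip_hidden_content(
    '<p>Hello</p><span style="font-size:0px;color:#ffffff">secret</span>'
)
print(clean)  # -> '<p>Hello</p>'
```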
[7]
This Google Gemini Flaw Can Create Malicious Gmail AI Summaries
AI summaries are meant to make life easier: They're meant to truncate a large amount of text into something you can quickly scan, so you can spend time on more pressing matters. The trouble is, you can't always trust these summaries. Usually, that's because AI hallucinates and incorrectly summarizes the text. In other cases, the summaries might actually be compromised by hackers. In fact, that's what's happening with Gemini, Google's proprietary AI, in Workspace. Like other generative AI models, Gemini can summarize emails in Gmail. However, as reported by BleepingComputer, the tech is vulnerable to exploitation. Hackers can inject these summaries with malicious information that pushes users toward phishing scams. Here's how it works: A bad actor creates an email with invisible text inside of it, utilizing HTML and CSS to manipulate the font size and color. You won't see this part of the message, but Gemini will. Because the hackers know not to use links or attachments, items that might trip Google's spam filters, the message has a high chance of landing in the user's inbox. So, you open the email, and notice nothing out of the ordinary. But it's long, so you choose to have Gemini summarize it for you. While the top of the summary is likely focused on the visible message, the end will summarize the hidden text. In one example, the invisible text instructed Gemini to produce an alert, warning the user that their Gmail password was compromised. It then highlighted a phone number to call for "support." This type of malicious activity is particularly dangerous. I can see how someone using Gemini believes a warning like this, especially if they already take the AI summaries at face value. Without knowing how the scam works, it seems like an official output from Gemini, as if Google engineered its AI to warn users when their passwords were compromised. Google did respond to a request for comment from BleepingComputer; it claims it has not seen evidence of Gemini manipulation in this way, and referred the outlet to a blog post on how it fights against prompt injection attacks. A representative shared the following message: "We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks." It confirmed some tactics are about to be deployed. The security researcher who discovered the flaw, Marco Figueroa, has some advice for security teams to combat this vulnerability. Figueroa recommends removing text designed to be hidden from the user, and running a filter that scans Gemini's outputs for anything suspicious, like links, phone numbers, or warnings. As a Workspace end user, however, you can't do much with that advice. But you don't need to, now that you know what to look for. If you use Gemini's AI summaries, be deeply skeptical of any urgent messages contained within -- especially if those warnings have nothing to do with the email itself. Sure, you might receive a legitimate email warning you about a data breach, and, as such, an AI-generated summary will tell you the same. But if the summary says the email in question is about an event happening in your city next week, and at the bottom of the summary you see a warning about your Gmail password being compromised, you can safely assume you're being messed with. Like other phishing schemes, the warning itself might have red flags. In the example highlighted by BleepingComputer, Gmail is spelled "GMail."
If you're not familiar with how Gmail is formatted, that might not stick out to you, but look for other inconsistencies and mistakes. Google also has no direct phone number to call for support issues. If you've ever tried to contact the company, you'll know there's virtually zero way to get in touch with a real person. Beyond this phishing scheme, you should be skeptical of AI summaries in general. That's not to say they should be avoided entirely -- they can be helpful -- but AI summaries are fallible, if not prone to failure. If the email you're reading is important, I would suggest avoiding the summaries feature, or at least scanning the original text to make sure the summary got it right.
[8]
Hackers hunt your emails with Google Gemini
A prompt-injection vulnerability in Google Gemini for Workspace was disclosed, enabling the generation of seemingly legitimate email summaries that can direct users to phishing sites via hidden instructions. This method circumvents traditional detection by avoiding attachments or direct links. This attack vector utilizes indirect prompt injections embedded within an email, which Gemini's summary generation process then obeys. Despite similar prompt injection attacks being reported since 2024 and Google's implementation of safeguards designed to block misleading responses, this specific technique has demonstrated continued success. The vulnerability was publicly revealed through 0din, Mozilla's bug bounty program dedicated to generative AI tools. Marco Figueroa, GenAI Bug Bounty Programs Manager at Mozilla, was responsible for the disclosure. The attack mechanism involves crafting an email that contains an invisible directive specifically intended for Gemini. An attacker can conceal this malicious instruction within the email's body text by applying HTML and CSS styling that sets the font size to zero and the font color to white. This renders the instruction imperceptible to the human eye when the email is viewed in Gmail. Crucially, because the email contains neither attachments nor direct links, it is highly probable that such a message will successfully bypass email security filters and reach the intended recipient's inbox without being flagged. Should a recipient open this email and subsequently use Google Gemini to generate a summary of its content, Google's AI tool will process the hidden, invisible directive. Consequently, Gemini will then obey this concealed instruction as part of its summary generation. Figueroa provided an example demonstrating this exploit: Gemini followed the embedded instruction and produced a security warning for the user, falsely stating that their Gmail password had been compromised, and included a fabricated support phone number. Given that many users are likely to place trust in Gemini's output as an integral function of Google Workspace, there is a high probability that this generated alert would be perceived as a legitimate security warning rather than a malicious injection, potentially leading users to contact the fraudulent number. In response to this vulnerability, Figueroa has outlined several detection and mitigation strategies that security teams can implement. One recommended approach involves developing systems to remove, neutralize, or entirely disregard content within the email body that is styled to be hidden. An alternative method proposed is to employ a post-processing filter that actively scans Gemini's generated output for specific indicators, such as urgent messages, unrecognized URLs, or suspicious phone numbers. Such a filter would then flag the summary for further review, preventing potentially malicious instructions from reaching the user unchallenged. Additionally, users are advised to exercise caution and should not consider Gemini summaries as authoritative sources for security alerts. BleepingComputer contacted Google for information regarding defenses against such attacks. A Google spokesperson directed BleepingComputer to a Google blog post detailing security measures against prompt injection attacks.
The spokesperson stated, "We are constantly hardening our already robust defenses through red-teaming exercises that train our models to defend against these types of adversarial attacks." The company representative further clarified that some of these mitigations are currently in the process of being implemented or are scheduled for deployment soon. Google has reported no evidence of incidents manipulating Gemini in the manner demonstrated in Figueroa's report.
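As a rough illustration of the post-processing filter Figueroa describes, a defender could scan the generated summary for the indicators he lists before displaying it. The phrase list and patterns below are illustrative assumptions, not a vetted ruleset:

```python
# Sketch of a post-processing filter: scan an AI-generated summary for
# urgency language, URLs, and phone numbers, and flag it for review
# instead of showing it directly. Patterns are illustrative only.
import re

URGENT_PHRASES = re.compile(
    r"password (?:has been )?compromised"
    r"|call (?:this|the following) number"
    r"|urgent|immediately|verify your account",
    re.IGNORECASE,
)
URL_PATTERN = re.compile(r"https?://\S+", re.IGNORECASE)
PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")  # loose phone-number match

def needs_review(summary: str) -> bool:
    """Return True if the summary should be held for human review."""
    return any(
        p.search(summary)
        for p in (URGENT_PHRASES, URL_PATTERN, PHONE_PATTERN)
    )

print(needs_review("Your Gmail password has been compromised, call 1-800-555-0100"))
# -> True
```

In practice, a flagged summary would be held or annotated rather than silently dropped, so legitimate breach-notification emails can still get through after review.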
[9]
Gemini flaw lets attackers hijack email summaries for phishing - Phandroid
A new flaw in Google Gemini for Workspace could open the door to phishing attacks, using AI-generated email summaries to deceive users. The issue was disclosed through 0din, Mozilla's bug bounty program for generative AI tools. Security researcher Marco Figueroa, who leads Mozilla's GenAI bug bounty efforts, found that attackers can embed invisible directives inside an email. These hidden instructions are written in HTML and CSS, using white text and a font size of zero. They are invisible to the recipient, and Gmail doesn't flag them as spam since there are no links or attachments. The trick works like this: an attacker sends an email that looks harmless, but if the recipient uses Gemini to summarize it, the AI will parse the invisible directive and follow it. That summary can then include misleading warnings or even phishing links that were never present in the original message. This is a type of prompt injection attack. While Google has implemented safeguards, this method appears to bypass them. Similar techniques were reported last year, but clearly the risk hasn't gone away. When BleepingComputer reached out to Google for a comment, the company said it has seen no real-world cases of this attack being used. A spokesperson pointed to a blog post about Gemini's security defenses and said the company continues to conduct red-teaming exercises to strengthen its systems. Still, the fact that Gemini can be manipulated this way shows how fragile AI systems remain. What looks like a helpful feature could easily turn into a security liability.
[10]
Gemini in Gmail Can Be Hacked to Send Phishing Messages, Researcher Finds
Gemini is said to treat admin commands with a higher priority. Gemini in Gmail is vulnerable to prompt injection-based phishing attacks, a researcher has demonstrated. As per the researcher, the artificial intelligence (AI) chatbot, which offers features such as email summary generation and email rewriting, can be manipulated into displaying phishing messages to users. This vulnerability poses a significant risk, as attackers could potentially exploit it to conduct online scams. Meanwhile, the Mountain View-based tech giant has reportedly said that it has so far not seen this manipulation technique used against users. The vulnerability was spotted and demonstrated by researcher Marco Figueroa, GenAI Bug Bounty Programmes Manager at Mozilla, via Mozilla's bug bounty programme for AI tools, 0din. Interestingly, to trigger this vulnerability, the scammer does not have to pull off any high-profile cyber heist. Instead, the attack can be carried out with a simple text command using a technique known as prompt injection. Prompt injection is a type of attack on AI chatbots where an attacker deliberately manipulates the input or prompt to make the model behave in unintended or malicious ways. In this particular scenario, the researcher used indirect prompt injection, where the malicious prompt is embedded inside a document, email, or web page. As per the researcher, he simply wrote a long email and added some hidden text at the end, which contained the prompt injection. The email did not contain any URLs or attachments, which made it easier for it to reach the receiver's primary inbox.

[Image: Adding a hidden malicious message in an email. Photo Credit: 0din/Marco Figueroa]

As shown in the image, the attacker used a white-coloured font on a white page to write the malicious message. This text is normally invisible to the receiver of the email. Other ways to add hidden text include using a zero font size, off-screen text placement, and other HTML or CSS tricks. Now, if the receiver uses Gemini's "summarise email" feature, the chatbot will process the hidden text and carry out the command without the user ever finding out, Figueroa said. He also highlighted that the probability of the chatbot following the command increases if the message is wrapped inside an admin tag, as the model considers it a high-priority request.

[Image: Gemini verbatim repeats the malicious message in the summary. Photo Credit: 0din/Marco Figueroa]

The cybersecurity researcher showed in another screenshot that Gemini indeed carried out the malicious instruction and displayed it as part of its email summary. Since the message now comes from Gemini, instead of an email from a likely stranger, the victim could be more likely to believe it and follow the instructions, falling for the scam. BleepingComputer reached out to Google to ask about the vulnerability, and a spokesperson said that the company has seen no evidence of similar manipulation so far. Additionally, it was highlighted that Google is in the process of implementing mitigations for prompt injection-based adversarial attacks.
[11]
Urgent warning for 1.8 billion Gmail users: 'Hidden danger' steals passwords in ways even AI can't detect
A "hidden danger," which is stealing passwords, has prompted Google to issue an urgent warning for more than a billion Gmail users. The new type of attack has been flying under the radar, attacking 1.8 billion Gmail users without them even realizing it. As the danger looms over Gmail accounts, users need to make sure they follow the right instructions to combat the malicious activity. According to The Sun, hackers are tricking users into giving their credentials by using Google Gemini, the company's built-in AI tool. According to cybersecurity experts, bad actors are sending emails with concealed instructions that cause Gemini to generate fake phishing warnings. These tricks deceive users into giving away personal account information or visiting harmful websites. The emails are typically crafted to seem urgent and sometimes appear to come from a business. Hackers will construct these emails by setting the font size to zero and the text color to white before inserting prompts invisible to users but picked up by Gemini, The Sun reported. GenAI bounty manager Marco Figueroa showed how a dangerous prompt could make users receive a false alert claiming their email account was compromised. These warnings would prompt victims to call a fake "Google support" phone number to resolve the issue. Experts have given multiple recommendations to users to help them fight these prompt injection attacks by acting immediately. The first suggestion asks the companies to configure email clients to detect and neutralize hidden content in message bodies. This move can help counter hackers sending invisible text within emails. Security experts also advised users to use post-processing filters to scan inboxes for things like "urgent messages," URLs, or phone numbers. This step can strengthen defenses against threats. The scam came to light following research led by Mozilla's 0Din security team, which showed proof of one of the hostile attacks last week. The report explained how hackers tricked Gemini into displaying a fake security alert. It warned users their password had been stolen, but the message was fake and designed to steal their information. The trick works by hiding a secret size-zero font prompt in white text that matches the email background. So when someone clicks "summarize this email" using Gemini, the tool reads the hidden message, not just the visible bit. This type of manipulation is called "indirect prompt injection," and it takes advantage of AI's inability to tell the difference between a user's question and a hacker's embedded message. AI can't distinguish between the two, since both simply look like text, and it will usually follow whichever appears first, even if it's malicious. Since Google has yet to patch this way of scamming victims, hackers can still exploit this technique. Sneaking in commands that the AI might follow will remain an effective way to leak sensitive data until users are properly protected from the threat. AI is also integrated into Google Docs, Calendar, and third-party apps, increasing the potential risk. Google has reminded users during this scamming crisis that it does not send security alerts through Gemini summaries.
[12]
Google AI Chatbot Target of Potential Phishing Attacks | PYMNTS.com
At issue is a prompt-injection flaw that allows cybercriminals to design phishing or vishing campaigns by creating messages that appear to be legitimate Google security warnings, the report said. Fraudsters can embed malicious prompt instructions into an email with "admin" instructions. If a recipient clicks "Summarize this email," Gemini treats the hidden admin prompt as its top priority and carries it out. "Because the injected text is rendered in white-on-white (or otherwise hidden), the victim never sees the instruction in the original message, only the fabricated 'security alert' in the AI-generated summary," 0din researcher Marco Figueroa wrote in a company blog post. For example, in a proof of concept, researchers embedded an invisible prompt inside an email that included a warning that the reader's Gmail password had been compromised, along with a number to call, according to the Dark Reading report. Once the user sees the message, they could call the phone number and become a victim of credential harvesting. Google last month discussed some of the defenses it is using to block prompt injection-style attacks in a company blog post. A spokesperson for the tech giant told Dark Reading that Google is in "mid-deployment on several of these updated defenses," per the report. Meanwhile, "cyber threats are becoming more varied and insidious," PYMNTS wrote Monday (July 14), following the revelation of a breach of a McDonald's AI hiring chatbot that exposed the personal information of 64 million job applicants. "For decades, enterprise cybersecurity strategies revolved around the notion of a clearly defined perimeter: secure what's inside, keep the bad actors out," the report said. "But cloud adoption, hybrid work, third-party tools, and bring-your-own-device (BYOD) policies have fragmented that perimeter into a patchwork of distributed endpoints and unseen attack vectors." The McDonald's breach shows that while companies invest in next-generation technologies, many still get tripped up by the mistakes of yesteryear. "In this case, the most avoidable mistake of all -- using a default password -- opened the door," PYMNTS wrote, as the fast-food giant chose "123456" as its password.
[13]
Gmail users beware! Scammers are using Gemini to steal your password, here's how
Experts urge use of email filters and staff training; Google says it's actively strengthening defences against prompt injection threats. Security experts have discovered a new Gmail scam that exploits Gemini to steal users' data. The AI tool, which integrates directly into Gmail via a vertical sidebar, assists users by summarising emails, creating calendar entries and more. However, new research has found that cyber attackers can exploit Gemini through "prompt injection." According to cybersecurity expert Marco Figueroa, attackers are using hidden prompts to trick Gemini into generating fake phishing alerts. Notably, around 1.8 billion Gmail users are potentially exposed to this scam. Here's how this new Google Gemini scam works and how you can stay safe. As reported, cybercriminals are sending hidden prompts using HTML and CSS within emails that appear to be from trusted sources. These hidden prompts reportedly come with a zero font size and are white in colour to stay invisible to users. When the user opens the email and asks Gemini to summarise it, the AI tool is tricked into executing the hidden prompt. Figueroa shared that one such hidden prompt instructs Gemini to display a warning claiming the recipient's Gmail account has been compromised. It then prompts the user to call a fraudulent customer support number, giving scammers direct access to sensitive account details. To stay safe from such attacks, experts recommend that Google introduce filters that remove or neutralise hidden content within emails. They also suggest using post-processing tools to flag suspicious summaries, and they advise users not to rely on AI-generated alerts for security decisions. Meanwhile, TechRadar has quoted a Google spokesperson as saying, "We are constantly hardening our already robust defences through red-teaming exercises that train our models to defend against these types of adversarial attacks." The company has also published a blog detailing its current countermeasures against prompt injection threats.
A security flaw in Google Gemini for Workspace allows attackers to manipulate AI-generated email summaries, potentially turning them into phishing tools. The vulnerability highlights growing concern about AI safety in mainstream applications.
Researchers have uncovered a significant security flaw in Google Gemini for Workspace, specifically affecting its email summary feature in Gmail. This vulnerability allows attackers to manipulate AI-generated summaries, potentially turning them into sophisticated phishing tools [1]. The discovery was made through Mozilla's bug bounty program for AI services, 0DIN, highlighting growing concern about AI safety in mainstream applications [2].
The attack leverages a technique known as "prompt injection," where hidden instructions are embedded within an email's body text. These instructions are invisible to the user but are processed by Gemini when generating email summaries. Attackers can achieve this by:

- Setting the font size of the injected text to zero
- Coloring the text white so it blends into the email background
- Using only standard HTML and CSS, with no links or attachments that might trip spam filters
When a user requests Gemini to summarize the email, the AI faithfully follows the hidden instructions, potentially generating fake security warnings or phishing messages within the summary.
This vulnerability is particularly concerning because:

- The malicious emails contain no links or attachments, so they bypass conventional spam and malware filters
- Users tend to trust AI-generated summaries as an integral part of Google Workspace
- The same technique could be applied to Gemini in Docs, Slides, and Drive search, and mass-distributed messages such as newsletters could scale a single compromised account into thousands of phishing beacons
Marco Figueroa, the researcher who disclosed the flaw, described prompt injections as "the new email macros," emphasizing the severity of the threat due to the perceived trustworthiness of AI-generated content [2].
Google has acknowledged the issue and stated that it is actively working on addressing it. The company's response includes:

- Red-teaming exercises that train its models to defend against adversarial attacks
- A multi-layered security approach to prompt injection across Gemini's infrastructure
- Additional mitigations that are currently being implemented or about to be deployed
Google also emphasized that it has not seen evidence of this specific method being used in active attacks against users.
To mitigate the risks associated with this vulnerability, experts suggest:

- Removing, neutralizing, or ignoring email content that is styled to be hidden in the body text
- Applying post-processing filters that scan AI-generated summaries for urgent language, URLs, or phone numbers and flag them for review
- Treating Gemini summaries as informational only, never as authoritative security alerts
As AI technologies continue to be integrated into everyday applications, this incident serves as a reminder of the importance of robust security measures and ongoing vigilance in the face of evolving cyber threats.