5 Sources
[1]
Google ads for shared ChatGPT, Grok guides push macOS infostealer malware
A new AMOS infostealer campaign is abusing Google search ads to lure users into Grok and ChatGPT conversations that appear to offer "helpful" instructions but ultimately lead to installing the AMOS info-stealing malware on macOS. The campaign was first spotted by researchers at cybersecurity company Kaspersky yesterday, while managed security platform Huntress published a more detailed report earlier today.

The ClickFix attack begins with victims searching for macOS-related terms, such as maintenance questions, problem-solving, or for Atlas, OpenAI's AI-powered web browser for macOS. Google advertisements link directly to ChatGPT and Grok conversations that had been publicly shared in preparation for the attack. The chats are hosted on the legitimate LLM platforms and contain the malicious instructions used to install the malware.

"During our investigation, the Huntress team reproduced these poisoned results across multiple variations of the same question, 'how to clear data on iMac,' 'clear system data on iMac,' 'free up storage on Mac,' confirming this isn't an isolated result but a deliberate, widespread poisoning campaign targeting common troubleshooting queries," Huntress researchers explain.

If users fall for the trick and execute the commands from the AI chat in macOS Terminal, a base64-encoded URL decodes into a bash script (update) that loads a fake password prompt dialog. When the password is provided, the script validates, stores, and uses it to execute privileged commands, such as downloading the AMOS infostealer and executing the malware with root-level privileges.

AMOS was first documented in April 2023. It is a malware-as-a-service (MaaS) operation that rents the infostealer for $1,000/month, targeting macOS systems exclusively. Earlier this year, AMOS added a backdoor module that lets operators execute commands on infected hosts, log keystrokes, and drop additional payloads.

AMOS is dropped into /Users/$USER/ as a hidden file (.helper). When launched, it scans the Applications folder for Ledger Wallet and Trezor Suite. If found, it overwrites them with trojanized versions that prompt the victim to enter their seed phrase "for security" reasons. AMOS also targets cryptocurrency wallets from Electrum, Exodus, MetaMask, Ledger Live, Coinbase Wallet, and others; browser data such as cookies, saved passwords, autofill data, and session tokens; macOS Keychain data such as app passwords and Wi-Fi credentials; and files on the filesystem. Persistence is achieved via a LaunchDaemon (com.finder.helper.plist) running a hidden AppleScript that acts as a watchdog loop, restarting the malware within one second if terminated.

These latest ClickFix attacks are yet another example of threat actors experimenting with new ways to exploit legitimate, popular platforms like OpenAI and X. Users need to be vigilant and avoid executing commands they found online, especially if they don't fully understand what they do. Kaspersky noted that, even after reaching these manipulated LLM conversations, a simple follow-up question asking ChatGPT whether the provided instructions are safe to execute reveals that they aren't.
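For anyone checking a Mac against the indicators described above, the hidden .helper file and the com.finder.helper.plist LaunchDaemon are simple to look for. Below is a minimal sketch in Python that checks those specific paths; it assumes the file names reported here and will not catch variants that use different ones.

# Minimal sketch: look for the AMOS persistence artifacts described in this report.
# File names (.helper, com.finder.helper.plist) are the ones reported here; other
# variants may use different names, so a clean result is not proof of a clean system.
from pathlib import Path

def find_indicators() -> list[str]:
    hits = []
    # Hidden dropper reportedly written to the user's home directory
    helper = Path.home() / ".helper"
    if helper.exists():
        hits.append(str(helper))
    # LaunchDaemon (also checking LaunchAgents as a precaution) used as a watchdog
    # to restart the malware if it is terminated
    for directory in ("/Library/LaunchDaemons", str(Path.home() / "Library/LaunchAgents")):
        plist = Path(directory) / "com.finder.helper.plist"
        if plist.exists():
            hits.append(str(plist))
    return hits

if __name__ == "__main__":
    found = find_indicators()
    if found:
        print("Possible AMOS persistence artifacts found:")
        for path in found:
            print("  " + path)
    else:
        print("No known AMOS artifacts at the reported paths (not a guarantee of a clean system).")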
[2]
Hackers tricked ChatGPT, Grok and Google into helping them install malware
Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersections between modern AI and old-school scams. Now, there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware. The warning comes by way of a recent report from detection-and-response firm Huntress.

Here's how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer's terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.

Huntress ran tests on both ChatGPT and Grok after discovering that a Mac-targeting data exfiltration attack called AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and, lacking the training to see that the advice was hostile, executed the command. This let the attackers install the AMOS malware. The testers discovered that both chatbots replicated the attack vector.

As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are Google and ChatGPT, which they've either used before or heard about nonstop for the last several years. They're primed to trust what those sources tell them. Even worse, while the link to the ChatGPT conversation has since been taken off Google, it was up for at least half a day after Huntress published their blog post.

This news comes at a time that's already fraught for both AIs. Grok has been getting dunked on for sucking up to Elon Musk in despicable ways, while ChatGPT creator OpenAI has been falling behind the competition. It's not yet clear if the attack can be replicated with other chatbots, but for now, I strongly recommend using caution. Alongside your other common-sense cybersecurity steps, make sure to never paste anything into your command terminal or your browser URL bar if you aren't certain of what it will do.
[3]
Attackers using ChatGPT to trick Mac users into installing MacStealer
Security researchers have found that attackers are using ChatGPT to trick Mac users into pasting a command line into Terminal which installs malware. Specifically, it installs MacStealer, which allows the attacker to obtain iCloud passwords, files, and credit card details. The attack targeted people who were searching Google for instructions on how to free up some disk space on a Mac ...

Engadget's Sam Chapman says he has been following the growing trend of using AI to find new ways to implement old-school scams when he spotted the report from cybersecurity company Huntress. Hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware.

The attackers held a conversation with ChatGPT in which they introduced the Terminal command, made the chat public, and then paid Google to promote the link. Huntress said this made it appear at the top of Google search results for freeing up disk space on a Mac. The victim had searched "Clear disk space on macOS." Google surfaced two highly ranked results at the top of the page, one directing the end user to a ChatGPT conversation and the other to a Grok conversation. Both were hosted on their respective legitimate platforms. Both conversations offered polite, step-by-step troubleshooting guidance. Both included macOS Terminal commands presented as "safe system cleanup" instructions.

The user clicked the ChatGPT link, read through the conversation, and executed the provided command. They believed they were following advice from a trusted AI assistant, delivered through a legitimate platform, surfaced by a search engine they use every day. Instead, they had just executed a command that downloaded an AMOS stealer variant that silently harvested their password, escalated to root, and deployed persistent malware. The same was done with X's Grok chatbot, targeting a range of similar search terms.

It's a worryingly clever approach because it bypasses all of the built-in macOS protections, allowing the user to install the malware with no warnings. It exploits the fact that people trust the well-known brands of both Google and ChatGPT. Pasting commands into Terminal without understanding them is a dangerous thing to do at the best of times. If you do it at all, you should ensure that you absolutely trust the source. Sponsored results in Google are not at all trustworthy. It would be extremely easy for a non-technical user to fall for this, so you might want to alert your family and friends.
[4]
New MacOS malware exploits trusted AI and search tools
Campaign abused Google ads and trusted AI platforms, boosting credibility and infection success.

Atomic Stealer (AMOS) criminals are using a combination of malvertising and GenAI response poisoning to trick macOS users into downloading malware. This is according to cybersecurity researchers at Huntress, who say they not only observed the attacks in the wild but also replicated the same results as the victims.

In a blog post published earlier this week, Huntress said that AMOS maintainers first created two AI conversations: one with ChatGPT, and one with Grok. These conversations were about freeing up disk space on a macOS device, and included instructions on how to do it. The instructions are fake, though, and instead tell the user to bring up the Terminal app and type in a command that downloads and runs the AMOS infostealer. From there, they purchased ad space on Google in order to promote these conversations. That way, when a user searches something like "how to clear disk space on MacOS", these poisoned conversations would be displayed at the very top of the search engine results page.

Apparently, the trick worked, because Huntress was brought in to investigate a case of AMOS infection. For those who are unaware, AMOS is an infamous macOS infostealer, capable of stealing sensitive data, passwords, cryptocurrency wallet information, and more.

The scam works similarly to ClickFix, another technique that tricks victims into running Terminal commands. The only difference is that in this case, the victims are actually proactively searching for a solution to a real problem, rather than to a non-existent one. What makes this campaign more dangerous is that it abuses not one but three trusted services: Google's search engine, ChatGPT, and Grok. At the end of the day, both of the conversations are hosted on their respective platforms, increasing the perceived legitimacy of the instructions. It is unclear how AMOS operators managed to get ChatGPT and Grok to display these results, though.

Via Apple Insider
[5]
Hackers trick ChatGPT and Grok to install malware onto devices: Report
Cybercriminals have found a worrying way to use AI tools to spread malware on computers, and they are doing it by taking advantage of Google search results. According to Huntress, hackers are using AI chats to plant harmful instructions that show up at the top of common search queries, tricking people into running dangerous commands on their own computers.

Here's how the scheme works. Attackers start a conversation with an AI assistant, such as ChatGPT or Grok, about a popular search topic. During this chat, they prompt the AI to suggest entering a specific command in a computer's terminal. That command is actually designed to give the hacker access to the victim's system. The attacker then makes the AI conversation public and pays to boost it so that it appears high in Google search results. When users search for that same topic, the harmful instructions appear like helpful advice.

Huntress explains that this method already led to a real-world infection involving a Mac-targeting malware called AMOS. In that case, a Mac user simply searched "clear disk space on Mac," clicked a sponsored ChatGPT link in Google, and followed the terminal command shown in the AI chat. Running the command allowed hackers to secretly install the AMOS malware. It's important to note that the harmful ChatGPT conversation stayed visible in Google search results for at least half a day after Huntress publicly reported the issue.

What makes this technique especially dangerous is that it avoids the usual warning signs of online scams. Victims do not have to download anything suspicious or click a strange link. For now, a simple rule can prevent major damage: never paste a command into your computer's terminal or browser bar unless you fully understand what it will do.
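As a rough illustration of that rule, the sketch below (our own Python example, not something from the Huntress report) scans a pasted command for a few of the red-flag patterns described in this campaign, such as decoding base64 content, piping a download straight into a shell, or requesting administrator rights, before anything is run.

# Rough sketch: flag a few red-flag patterns in a command before it is pasted into
# Terminal. These heuristics are our own illustration, not an official checklist,
# and a "clean" result does not make a command safe to run.
import re
import sys

RED_FLAGS = [
    (r"curl[^|]*\|\s*(bash|sh|zsh)", "downloads a script and pipes it straight into a shell"),
    (r"base64\s+(-d|-D|--decode)", "decodes hidden (base64-encoded) content"),
    (r"\bsudo\b", "asks for administrator privileges"),
    (r"\bosascript\b", "runs AppleScript, which can display fake password prompts"),
    (r"chmod\s+\+x", "marks a downloaded file as executable"),
]

def review(command: str) -> list[str]:
    return [reason for pattern, reason in RED_FLAGS if re.search(pattern, command)]

if __name__ == "__main__":
    cmd = " ".join(sys.argv[1:]) or input("Paste the command to review (it will NOT be run): ")
    warnings = review(cmd)
    if warnings:
        print("Do not run this without expert review:")
        for warning in warnings:
            print("  - " + warning)
    else:
        print("No obvious red flags found, but only run commands you fully understand.")

A clean result from heuristics like these does not make a command safe; the only reliable rule remains understanding what a command does before running it.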
Cybercriminals are weaponizing trust in Google Ads and AI chatbots like ChatGPT and Grok to spread macOS malware. Attackers create public AI conversations with malicious Terminal commands, then pay to boost them in search results. When Mac users search for common troubleshooting queries, they unknowingly install AMOS infostealer malware that harvests passwords, cryptocurrency wallets, and sensitive data.
Cybercriminals have developed a sophisticated attack vector that exploits the trust users place in AI chatbots and search engines to spread macOS malware. Researchers at Huntress and Kaspersky identified a campaign where threat actors create seemingly legitimate conversations with ChatGPT and Grok, then use Google Ads to promote malicious links that appear at the top of search results for common Mac troubleshooting queries [1][2]. The attack demonstrates how malvertising combined with response poisoning can bypass traditional security awareness training.
The campaign specifically targets users searching for terms like "how to clear data on iMac," "clear system data on iMac," and "free up storage on Mac" [1]. Huntress researchers reproduced these poisoned results across multiple variations, confirming this represents a deliberate, widespread campaign rather than isolated incidents [1].

The attack methodology reveals careful planning. Threat actors first create public conversations on legitimate platforms where they prompt AI assistants to suggest pasting malicious Terminal commands disguised as system maintenance instructions [4]. These conversations are hosted on ChatGPT and Grok's official platforms, lending them perceived legitimacy. Attackers then purchase ad space to promote these conversations, ensuring they appear prominently when users search for help [4].
When victims execute the command-line instructions from these AI conversations, a base64-encoded URL decodes into a bash script that loads a fake password prompt dialog [1]. After the user provides their password, the script validates, stores, and uses it to execute privileged commands, downloading the AMOS infostealer and executing the malware with root-level privileges [1].
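One practical consequence of that chain is that an encoded blob embedded in a "cleanup" command can be decoded and read before anything is executed. The snippet below is a minimal illustration in Python using a harmless placeholder string, not the actual payload from this campaign.

# Minimal sketch: decode a base64 string found in a suspicious command so you can
# read what it contains instead of executing it. The string below is a harmless
# placeholder (it decodes to https://example.com/update.sh), not the real payload.
import base64

suspicious = "aHR0cHM6Ly9leGFtcGxlLmNvbS91cGRhdGUuc2g="
decoded = base64.b64decode(suspicious).decode("utf-8", errors="replace")
print("The encoded data decodes to:", decoded)
# If the result is a URL or a shell script you did not expect, do not run the command.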
The AMOS infostealer, first documented in April 2023, operates as a malware-as-a-service offering that rents for $1,000 per month and exclusively targets macOS systems [1]. Earlier this year, AMOS added a backdoor module enabling operators to execute commands on infected hosts, log keystrokes, and deploy additional payloads [1].

When launched, AMOS drops into /Users/$USER/ as a hidden file (.helper) and scans for the Ledger Wallet and Trezor Suite applications [1]. If found, it overwrites them with trojanized versions prompting victims to enter their seed phrase for supposed security reasons. The stealer also targets cryptocurrency wallets from Electrum, Exodus, MetaMask, Ledger Live, and Coinbase Wallet, along with browser data including cookies, saved passwords, autofill data, and session tokens [1][3]. It harvests macOS Keychain data containing app passwords and Wi-Fi credentials, plus files across the filesystem [1].
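Because the stealer reportedly swaps legitimate wallet apps for trojanized copies, one defensive check worth considering (our suggestion, not something the researchers prescribe) is to verify those apps' code signatures with Apple's built-in codesign tool; a tampered bundle will typically fail verification. A minimal sketch in Python, assuming default install paths for Ledger Live and Trezor Suite:

# Sketch: verify the code signatures of wallet apps that AMOS reportedly replaces.
# Uses Apple's standard codesign tool; the app paths are typical install locations
# and may differ on your system. A failed check warrants investigation, not panic.
import subprocess
from pathlib import Path

APPS = [
    "/Applications/Ledger Live.app",
    "/Applications/Trezor Suite.app",
]

for app in APPS:
    if not Path(app).exists():
        print(f"{app}: not installed, skipping")
        continue
    # codesign returns 0 when the bundle's signature verifies cleanly
    result = subprocess.run(
        ["codesign", "--verify", "--deep", "--strict", app],
        capture_output=True, text=True,
    )
    if result.returncode == 0:
        print(f"{app}: signature verifies")
    else:
        print(f"{app}: signature check FAILED -> {result.stderr.strip()}")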
What makes this campaign particularly dangerous is how it circumvents traditional cybersecurity defenses. As Engadget notes, the attack bypasses almost all red flags users have been taught to recognize [2]. Victims don't download suspicious files, install questionable executables, or click shady links. Instead, they only need to trust Google and ChatGPT, platforms they've either used before or heard about constantly [2].

This approach represents an evolution of ClickFix techniques, where victims are tricked into running malicious Terminal commands [4]. However, this campaign proves more effective because victims actively search for solutions to real problems rather than fabricated ones [4]. The data exfiltration occurs silently after users grant root access through what appears to be legitimate system maintenance [3].
The harmful ChatGPT conversation remained visible in Google search results for at least half a day after Huntress published their findings [2]. This delayed response highlights challenges in moderating content that exploits legitimate platforms. Kaspersky researchers noted that even after reaching these manipulated conversations, asking ChatGPT whether the provided instructions are safe to execute reveals they aren't [1].

For users, the fundamental rule remains unchanged: never paste commands into Terminal or browser URL bars without fully understanding their function [2][5]. Sponsored results in Google searches should not be considered inherently trustworthy [3]. As OpenAI and other AI platforms face mounting competition and scrutiny, this incident underscores how threat actors continuously experiment with new methods to exploit trusted services [1]. Watch for similar campaigns targeting other operating systems as attackers refine these techniques.