7 Sources
[1]
Google ads for shared ChatGPT, Grok guides push macOS infostealer malware
A new AMOS infostealer campaign is abusing Google search ads to lure users to Grok and ChatGPT conversations that appear to offer "helpful" instructions but ultimately lead to installing the AMOS info-stealing malware on macOS. The campaign was first spotted by researchers at cybersecurity company Kaspersky yesterday, while managed security platform Huntress published a more detailed report earlier today.

The ClickFix attack begins with victims searching for macOS-related terms, such as maintenance questions, troubleshooting steps, or Atlas, OpenAI's AI-powered web browser for macOS. Google advertisements link directly to ChatGPT and Grok conversations that had been publicly shared in preparation for the attack. The chats are hosted on the legitimate LLM platforms and contain the malicious instructions used to install the malware.

"During our investigation, the Huntress team reproduced these poisoned results across multiple variations of the same question, 'how to clear data on iMac,' 'clear system data on iMac,' 'free up storage on Mac,' confirming this isn't an isolated result but a deliberate, widespread poisoning campaign targeting common troubleshooting queries," Huntress researchers explain.

If users fall for the trick and execute the commands from the AI chat in macOS Terminal, a base64-encoded URL decodes into a bash script (update) that loads a fake password prompt dialog. When the password is provided, the script validates, stores, and uses it to execute privileged commands, such as downloading the AMOS infostealer and executing the malware with root-level privileges.

AMOS was first documented in April 2023. It is a malware-as-a-service (MaaS) operation that rents the infostealer for $1,000/month, targeting macOS systems exclusively. Earlier this year, AMOS added a backdoor module that lets operators execute commands on infected hosts, log keystrokes, and drop additional payloads. AMOS is dropped in /Users/$USER/ as a hidden file (.helper).
When launched, it scans the Applications folder for Ledger Wallet and Trezor Suite. If found, it overwrites them with trojanized versions that prompt the victim to enter their seed phrase "for security" reasons. AMOS also targets cryptocurrency wallets from Electrum, Exodus, MetaMask, Ledger Live, Coinbase Wallet, and others; browser data such as cookies, saved passwords, autofill data, and session tokens; macOS Keychain data such as app passwords and Wi-Fi credentials; and files on the filesystem. Persistence is achieved via a LaunchDaemon (com.finder.helper.plist) running a hidden AppleScript which acts as a watchdog loop, restarting the malware within one second if terminated.

These latest ClickFix attacks are yet another example of threat actors experimenting with new ways to exploit legitimate, popular platforms like OpenAI's ChatGPT and X's Grok. Users need to be vigilant and avoid executing commands they find online, especially if they don't fully understand what they do. Kaspersky noted that, even after reaching these manipulated LLM conversations, a simple follow-up question asking ChatGPT whether the provided instructions are safe to execute reveals that they aren't.
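The persistence artifacts named above (the hidden .helper file in the user's home directory and the com.finder.helper.plist LaunchDaemon) can be checked for directly. The shell sketch below looks for those two indicators; the file names and paths come from this report, but treat it as an illustrative starting point rather than a complete AMOS detector:

```shell
# Illustrative check for the AMOS artifacts described in this report.
# Absence of these files does not guarantee a clean system.
found=0

# Hidden payload dropped in the user's home directory
if [ -f "$HOME/.helper" ]; then
    echo "Suspicious file found: $HOME/.helper"
    found=1
fi

# The persistence plist may live in the system-wide or per-user folders
for dir in /Library/LaunchDaemons "$HOME/Library/LaunchAgents"; do
    if [ -f "$dir/com.finder.helper.plist" ]; then
        echo "Suspicious persistence item: $dir/com.finder.helper.plist"
        found=1
    fi
done

if [ "$found" -eq 0 ]; then
    echo "No known AMOS artifacts found"
fi
```

On an infected machine, removing the plist alone is not enough: the report notes a watchdog loop restarts the malware within a second, so a full cleanup or professional incident response is warranted.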
[2]
Hackers tricked ChatGPT, Grok and Google into helping them install malware
Ever since reporting earlier this year on how easy it is to trick an agentic browser, I've been following the intersections between modern AI and old-school scams. Now, there's a new convergence on the horizon: hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware. The warning comes by way of a recent report from detection-and-response firm Huntress.

Here's how it works. First, the threat actor has a conversation with an AI assistant about a common search term, during which they prompt the AI to suggest pasting a certain command into a computer's terminal. They make the chat publicly visible and pay to boost it on Google. From then on, whenever someone searches for the term, the malicious instructions will show up high on the first page of results.

Huntress ran tests on both ChatGPT and Grok after discovering that an attack involving the Mac-targeting infostealer AMOS had originated from a simple Google search. The user of the infected device had searched "clear disk space on Mac," clicked a sponsored ChatGPT link and -- lacking the training to see that the advice was hostile -- executed the command. This let the attackers install the AMOS malware. The testers discovered that both chatbots replicated the attack vector.

As Huntress points out, the evil genius of this attack is that it bypasses almost all the traditional red flags we've been taught to look for. The victim doesn't have to download a file, install a suspicious executable or even click a shady link. The only things they have to trust are Google and ChatGPT, which they've either used before or heard about nonstop for the last several years. They're primed to trust what those sources tell them.
Even worse, while the link to the ChatGPT conversation has since been taken off Google, it was up for at least half a day after Huntress published their blog post. This news comes at a time that's already fraught for both AIs. Grok has been getting dunked on for sucking up to Elon Musk in despicable ways, while ChatGPT creator OpenAI has been falling behind the competition. It's not yet clear if the attack can be replicated with other chatbots, but for now, I strongly recommend using caution. Alongside your other common-sense cybersecurity steps, make sure to never paste anything into your command terminal or your browser URL bar if you aren't certain of what it will do.
[3]
Attackers using ChatGPT to trick Mac users into installing the AMOS infostealer
Security researchers have found that attackers are using ChatGPT to trick Mac users into pasting a command line into Terminal which installs malware. Specifically, it installs the AMOS infostealer (Atomic macOS Stealer), which allows the attacker to obtain passwords, files, and credit card details. The attack targeted people who were searching Google for instructions on how to free up some disk space on a Mac ...

Engadget's Sam Chapman had been following the growing trend of using AI to implement old-school scams when he spotted the report from cybersecurity company Huntress. Hackers are apparently using AI prompts to seed Google search results with dangerous commands. When executed by unknowing users, these commands prompt computers to give the hackers the access they need to install malware.

The attackers held a conversation with ChatGPT in which they introduced the Terminal command, made the chat public, and then paid Google to promote the link. Huntress said this made it appear at the top of Google search results for freeing up disk space on a Mac.

The victim had searched "Clear disk space on macOS." Google surfaced two highly ranked results at the top of the page, one directing the end user to a ChatGPT conversation and the other to a Grok conversation. Both were hosted on their respective legitimate platforms. Both conversations offered polite, step-by-step troubleshooting guidance, including macOS Terminal commands presented as "safe system cleanup" instructions.

The user clicked the ChatGPT link, read through the conversation, and executed the provided command. They believed they were following advice from a trusted AI assistant, delivered through a legitimate platform, surfaced by a search engine they use every day. Instead, they had just executed a command that downloaded an AMOS stealer variant that silently harvested their password, escalated to root, and deployed persistent malware. The same was done with X's Grok chatbot.
It's a worryingly clever approach because it bypasses built-in macOS protections, allowing the user to install the malware with no warnings. It exploits the fact that people trust the well-known brands of both Google and ChatGPT.

Pasting commands into Terminal without understanding them is a dangerous thing to do at the best of times. If you do it at all, you should ensure that you absolutely trust the source. Sponsored results in Google are not at all trustworthy. It would be extremely easy for a non-technical user to fall for this, so you might want to alert your family and friends.
[4]
New MacOS malware exploits trusted AI and search tools
Campaign abused Google ads and trusted AI platforms, boosting credibility and infection success

Atomic macOS Stealer (AMOS) criminals are using a combination of malvertising and GenAI response poisoning to trick macOS users into downloading malware. This is according to cybersecurity researchers at Huntress, who not only observed the attacks in the wild but also reproduced the same poisoned results themselves.

In a blog post published earlier this week, Huntress said that AMOS maintainers first created two AI conversations: one with ChatGPT, and one with Grok. These conversations were about freeing up disk space on a macOS device, and included instructions on how to do it. The instructions are fake, though, and instead tell the user to bring up the Terminal app and type in a command that downloads and runs the AMOS infostealer.

From there, they purchased ad space on Google in order to promote these conversations. That way, when a user searches something like "how to clear disk space on macOS", these poisoned conversations would be displayed at the very top of the search engine results page. Apparently, the trick worked, because Huntress was brought in to investigate a case of AMOS infection.

For those who are unaware, AMOS is an infamous macOS infostealer, capable of stealing sensitive data, passwords, cryptocurrency wallet information, and more. The scam works similarly to ClickFix, another technique that tricks victims into running Terminal commands. The only difference is that in this case, the victims are proactively searching for a solution to a real problem, rather than a non-existent one.

What makes this campaign more dangerous is that it abuses not one but three trusted services: Google's search engine, ChatGPT, and Grok. At the end of the day, both conversations are hosted on their respective platforms, increasing the perceived legitimacy of the instructions.
It is unclear how AMOS operators managed to get ChatGPT and Grok to display these results, though.

Via AppleInsider
[5]
Attackers Are Spreading Malware Through ChatGPT
You (hopefully) know by now that you can't take everything AI tells you at face value. Large language models (LLMs) sometimes provide incorrect information, and threat actors are now using paid search ads on Google to spread conversations with ChatGPT and Grok that appear to provide tech support instructions but actually direct macOS users to install infostealing malware on their devices.

The campaign is a variation on the ClickFix attack, which often uses CAPTCHA prompts or fake error messages to trick targets into executing malicious commands. But in this case, the instructions are disguised as helpful troubleshooting guides on legitimate AI platforms.

Kaspersky details a campaign specific to installing Atlas for macOS. If a user searches "chatgpt atlas" to find a guide, the first sponsored result is a link to chatgpt.com with the page title "ChatGPT™ Atlas for macOS - Download ChatGPT Atlas for Mac." If you click through, you'll land on the official ChatGPT site and find a series of instructions for (supposedly) installing Atlas. However, the page is a copy of a conversation between an anonymous user and the AI -- which can be shared publicly -- that is actually a malware installation guide. The chat directs you to copy, paste, and execute a command in your Mac's Terminal and grant all permissions, which hands over access to the AMOS (Atomic macOS Stealer) infostealer.

A further investigation from Huntress showed similarly poisoned results via both ChatGPT and Grok using more general troubleshooting queries like "how to delete system data on Mac" and "clear disk space on macOS."

AMOS targets macOS, gaining root-level privileges and allowing attackers to execute commands, log keystrokes, and deliver additional payloads. BleepingComputer notes that the infostealer also targets cryptocurrency wallets, browser data (including cookies, saved passwords, and autofill data), macOS Keychain data, and files on the filesystem.
If you're troubleshooting a tech issue, carefully vet any instructions you find online. Threat actors often use sponsored search results as well as social media platforms to spread instructions that are actually ClickFix attacks. Never follow any guidance that you don't understand, and know that if it asks you to execute commands on your device using PowerShell or Terminal to "fix" a problem, there's a high likelihood that it's malicious -- even if it comes from a search engine or LLM you've used and trusted in the past. Of course, you can potentially turn the attack around by asking ChatGPT (in a new conversation) if the instructions are safe to follow. According to Kaspersky, the AI will tell you that they aren't.
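One practical way to follow that advice: before running a pasted one-liner, decode any base64 blob it contains instead of piping it straight to a shell, so you can read what it would actually execute. The blob below is a harmless, made-up stand-in (it decodes to a plain echo command), not the actual payload from this campaign:

```shell
# Attack one-liners often hide their real action behind base64, e.g.:
#   echo <BLOB> | base64 --decode | bash
# Decoding WITHOUT the trailing "| bash" reveals the hidden command.
# The blob here is a benign stand-in, not the real AMOS payload.
blob="ZWNobyAnVGhpcyBjb3VsZCBoYXZlIGJlZW4gYW55IGNvbW1hbmQn"

decoded=$(printf '%s' "$blob" | base64 --decode)
echo "Hidden command: $decoded"
```

Both GNU coreutils and recent macOS accept `base64 --decode` (older macOS versions use `-D`). If the decoded text fetches a URL or asks for your password, don't run it.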
[6]
Kaspersky warns of ChatGPT-themed macOS malware campaign
Kaspersky Threat Research has uncovered a new malware campaign targeting macOS users, exploiting paid Google search ads and shared conversations on the official ChatGPT website to distribute the AMOS (Atomic macOS Stealer) infostealer along with a persistent backdoor.

According to Kaspersky, attackers are purchasing sponsored search ads linked to queries such as "chatgpt atlas" and redirecting users to what appears to be an installation guide for "ChatGPT Atlas for macOS". The page is hosted on chatgpt.com and presented as a shared ChatGPT conversation. In reality, the content has been generated through prompt engineering and stripped down to display only step-by-step installation instructions.

The guide instructs users to copy a single line of code, open the Terminal application on macOS, paste the command, and grant all requested permissions. Kaspersky's analysis shows that executing the command downloads and runs a malicious script from an external domain, atlas-extension[.]com. The script repeatedly prompts users for their system password, validating it by attempting to execute system-level commands. Once the correct password is entered, the malware proceeds to download and install the AMOS infostealer using the stolen credentials, before launching it on the device.

The infection method is a variation of the "ClickFix" technique, which relies on persuading users to manually execute shell commands that retrieve malicious code from remote servers. Once installed, AMOS harvests sensitive data that can be monetised or reused in subsequent attacks. This includes passwords and cookies from popular web browsers, data from cryptocurrency wallets such as Electrum, Coinomi and Exodus, and information from applications including Telegram Desktop and OpenVPN Connect.
The malware also scans for TXT, PDF and DOCX files stored in the Desktop, Documents and Downloads folders, as well as notes saved in the macOS Notes app, exfiltrating the data to attacker-controlled infrastructure.

A backdoor

In parallel, the campaign deploys a backdoor that is configured to persist across system reboots, providing attackers with remote access to compromised devices and duplicating much of AMOS's data-collection functionality.

Kaspersky said the campaign highlights a broader trend in which infostealers have emerged as one of the fastest-growing cyber threats in 2025. Attackers are increasingly leveraging AI-related themes, fake AI tools and AI-generated content to enhance the credibility of their lures. The Atlas-themed activity extends this trend by abusing a legitimate AI platform's content-sharing features.

"What makes this case effective is not a sophisticated exploit, but the way social engineering is wrapped in a familiar AI context," said Vladimir Gursky, malware analyst at Kaspersky. "A sponsored link leads to a well-formatted page on a trusted domain, and the 'installation guide' is just a single Terminal command. For many users, that combination of trust and simplicity is enough to bypass their usual caution, yet the result is full compromise of the system and long-term access for the attacker."

Kaspersky advised users to exercise caution when encountering unsolicited guides that require running Terminal or PowerShell commands, particularly those involving one-line scripts copied from websites, documents or chat messages. The company also recommended verifying suspicious commands using security tools, avoiding unclear instructions, and ensuring reputable security software is installed and kept up to date on macOS systems.
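In that spirit of verification, a quick manual check on macOS is to list the folders where third-party launch items (the usual home of reboot-surviving persistence like the backdoor described above) would appear. This is a minimal sketch; the list_persistence_dirs helper name is ours, and on a healthy system most entries will be legitimate software that simply needs reviewing:

```shell
# Print the contents of the standard third-party launch item folders.
# Apple's own launch items live under /System and are not listed here.
list_persistence_dirs() {
    for dir in "$HOME/Library/LaunchAgents" \
               /Library/LaunchAgents \
               /Library/LaunchDaemons; do
        echo "== $dir =="
        # A missing folder is normal on a lightly used system
        ls -1 "$dir" 2>/dev/null || echo "(missing)"
    done
}

list_persistence_dirs
```

Unfamiliar plist names, especially ones mimicking system components (like the campaign's com.finder.helper.plist), are worth investigating before assuming a machine is clean.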
[7]
Hackers trick ChatGPT and Grok to install malware onto devices: Report
Cybercriminals have found a worrying way to use AI tools to spread malware on computers, and they are doing it by taking advantage of Google search results. According to Huntress, hackers are using AI chats to plant harmful instructions that show up at the top of common search queries, tricking people into running dangerous commands on their own computers.

Here's how the scheme works. Attackers start a conversation with an AI assistant, such as ChatGPT or Grok, about a popular search topic. During this chat, they prompt the AI to suggest entering a specific command in a computer's terminal. That command is actually designed to give the hacker access to the victim's system. The attacker then makes the AI conversation public and pays to boost it so that it appears high in Google search results. When users search for that same topic, the harmful instructions appear like helpful advice.

Huntress explains that this method already led to a real-world infection involving a Mac-targeting malware called AMOS. In that case, a Mac user simply searched "clear disk space on Mac," clicked a sponsored ChatGPT link in Google, and followed the terminal command shown in the AI chat. Running the command allowed hackers to secretly install the AMOS malware.

It's important to note that the harmful ChatGPT conversation stayed visible in Google search results for at least half a day after Huntress publicly reported the issue.

What makes this technique especially dangerous is that it avoids the usual warning signs of online scams. Victims do not have to download anything suspicious or click a strange link.
For now, a simple rule can prevent major damage: never paste a command into your computer's terminal or browser bar unless you fully understand what it will do.
Cybersecurity researchers at Huntress and Kaspersky uncovered a sophisticated campaign using Google ads to promote malicious ChatGPT and Grok conversations. The attack tricks macOS users searching for troubleshooting advice into executing terminal commands that install AMOS infostealer malware. This social engineering tactic exploits user trust in established platforms to bypass traditional security measures.
A sophisticated malvertising campaign is leveraging the trust users place in ChatGPT and other AI chatbots to distribute the AMOS infostealer, a dangerous strain of macOS malware that targets sensitive data and cryptocurrency wallets. Cybersecurity researchers at Huntress and Kaspersky discovered that threat actors are purchasing Google ad placements to promote publicly shared conversations on legitimate platforms, directing victims to execute malicious terminal commands disguised as helpful troubleshooting instructions [1][2].
The attack represents a dangerous evolution in social engineering tactics, exploiting user trust in established brands to bypass macOS security protections. When users search for common queries like "clear disk space on macOS" or "how to delete system data on Mac," sponsored links appear at the top of Google results, directing them to ChatGPT or Grok conversations hosted on legitimate LLM platforms [4].

The ClickFix campaign begins with attackers creating seemingly helpful conversations on ChatGPT and Grok about macOS troubleshooting topics. These conversations contain AI-generated malicious commands presented as safe system cleanup instructions. Huntress researchers reproduced these poisoned results across multiple variations, confirming this isn't an isolated incident but a deliberate, widespread poisoning campaign targeting common troubleshooting queries [1].

When victims execute the provided commands in Terminal, a base64-encoded URL decodes into a bash script that loads a fake password prompt dialog. Once the password is provided, the script validates, stores, and uses it to execute privileged commands, downloading the AMOS infostealer and executing the malware with root-level privileges [1]. The attack bypasses traditional red flags because victims don't download files or click suspicious links -- they only trust Google and ChatGPT, platforms they've used before or heard about constantly [2].
AMOS was first documented in April 2023 as a malware-as-a-service operation that rents the infostealer for $1,000 per month, targeting macOS systems exclusively. Earlier this year, AMOS added a backdoor module that lets operators execute commands on infected hosts, log keystrokes, and drop additional payloads [1].

Once installed, AMOS is dropped as a hidden file (.helper) in the user directory. The malware scans for cryptocurrency wallets including Ledger Wallet, Trezor Suite, Electrum, Exodus, MetaMask, Ledger Live, and Coinbase Wallet. When found, it overwrites legitimate wallet applications with trojanized versions that prompt victims to enter their seed phrase for supposed security reasons [1]. The infostealer also targets browser data including cookies, saved passwords, autofill data, and session tokens, as well as macOS Keychain data containing app passwords and Wi-Fi credentials [5].

Persistence is achieved through a LaunchDaemon running a hidden AppleScript that acts as a watchdog loop, restarting the malware within one second if terminated [1].
This campaign demonstrates how threat actors exploit trusted AI platforms to conduct data exfiltration attacks that circumvent traditional security awareness training. The attack's effectiveness lies in its ability to weaponize the credibility of Google Ads, ChatGPT, and Grok simultaneously. Users are primed to trust what these sources tell them, making them vulnerable to executing commands they don't fully understand [2].
Kaspersky noted that even after reaching these manipulated conversations, a simple follow-up question asking ChatGPT if the provided instructions are safe to execute reveals that they aren't [1][5]. This suggests users should verify any technical instructions through additional queries before execution. Security experts recommend never pasting commands into Terminal or browser URL bars without absolute certainty about their function, especially when they request elevated privileges or come from sponsored search results [3][5].

Summarized by Navi