Hackers exploit Google Ads and AI chatbots to spread macOS malware through poisoned conversations

Reviewed by Nidhi Govil


Cybercriminals are weaponizing trust in Google Ads and AI chatbots like ChatGPT and Grok to spread macOS malware. Attackers create public AI conversations containing malicious Terminal commands, then pay to boost them in search results. Mac users who find these results while searching common troubleshooting queries are tricked into running commands that install the AMOS infostealer, which harvests passwords, cryptocurrency wallets, and other sensitive data.

Hackers Weaponize AI Chatbots and Google Ads to Deploy macOS Malware

Cybercriminals have developed a sophisticated attack vector that exploits the trust users place in AI chatbots and search engines to spread macOS malware. Researchers at Huntress and Kaspersky identified a campaign where threat actors create seemingly legitimate conversations with ChatGPT and Grok, then use Google Ads to promote malicious links that appear at the top of search results for common Mac troubleshooting queries [1][2]. The attack demonstrates how malvertising combined with response poisoning can bypass traditional security awareness training.

Source: Engadget

The campaign specifically targets users searching for terms like "how to clear data on iMac," "clear system data on iMac," and "free up storage on Mac" [1]. Huntress researchers reproduced these poisoned results across multiple variations, confirming a deliberate, widespread campaign rather than isolated incidents [1].

How Attackers Trick Mac Users with Poisoned AI Responses

The attack methodology reveals careful planning. Threat actors first create public conversations on legitimate platforms, prompting the AI assistants to recommend pasting malicious Terminal commands disguised as system maintenance instructions [4]. These conversations are hosted on ChatGPT's and Grok's official platforms, lending them perceived legitimacy. Attackers then purchase ad space to promote the conversations, ensuring they appear prominently when users search for help [4].

Source: 9to5Mac

When victims execute the command-line instructions from these AI conversations, a base64-encoded URL decodes into a bash script that displays a fake password prompt [1]. After the user enters their password, the script validates and stores it, then uses it to run privileged commands, downloading the AMOS infostealer and executing it with root-level privileges [1]. The pattern is illustrated in the sketch below.
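For readers unfamiliar with the technique, here is a defanged sketch of the pattern described above. Everything in it is illustrative: the dialog text and file path are stand-ins invented for this example, the decode and privilege-escalation steps are shown only as comments, and nothing harmful can run.

    # DEFANGED ILLUSTRATION -- shown so the pattern can be recognized,
    # never run real versions of this.
    # 1. The poisoned answer asks the victim to paste a one-liner that
    #    decodes a base64 blob into a script and pipes it to bash:
    #      echo '<base64 blob>' | base64 -d | bash
    # 2. The fetched script pops an official-looking password dialog.
    #    (This line is harmless to run; it only displays a dialog.)
    osascript -e 'display dialog "macOS needs your password to free up storage" default answer "" with hidden answer'
    # 3. The captured password is then replayed to sudo so the dropped
    #    binary executes as root, along the lines of:
    #      echo "$STOLEN_PASSWORD" | sudo -S /private/tmp/.payload

The tell-tale signs are the opaque base64 blob and a system password dialog appearing right after something was pasted into Terminal.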

Source: Digit

AMOS Infostealer Malware Targets Cryptocurrency and Sensitive Data

The AMOS infostealer, first documented in April 2023, operates as a malware-as-a-service offering that rents for $1,000 per month and exclusively targets macOS systems [1]. Earlier this year, AMOS added a backdoor module that lets operators execute commands on infected hosts, log keystrokes, and deploy additional payloads [1].

When launched, AMOS drops a hidden file (.helper) in /Users/$USER/ and scans for the Ledger Wallet and Trezor Suite applications [1]. If it finds them, it overwrites them with trojanized versions that prompt victims to enter their seed phrase for supposed security reasons. The MacStealer variant also targets cryptocurrency wallets from Electrum, Exodus, MetaMask, Ledger Live, and Coinbase Wallet, along with browser data including cookies, saved passwords, autofill data, and session tokens [1][3]. It additionally harvests macOS Keychain data containing app passwords and Wi-Fi credentials, plus files across the filesystem [1].
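A quick way to check for the dropper indicator described above is to look for that hidden file. The .helper path comes from the researchers' findings; the check itself is a minimal sketch assumed here, not an official Huntress or Kaspersky detection tool.

    # Look for the hidden AMOS dropper file reported by researchers.
    if [ -f "/Users/$USER/.helper" ]; then
        echo "WARNING: /Users/$USER/.helper exists (possible AMOS indicator)"
    fi

    # List other hidden regular files in the home directory for manual review.
    find "/Users/$USER" -maxdepth 1 -type f -name '.*'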

Bypassing macOS Security Through Social Engineering

What makes this campaign particularly dangerous is how it circumvents traditional cybersecurity defenses. As Engadget notes, the attack bypasses almost all the red flags users have been taught to recognize [2]. Victims don't download suspicious files, install questionable executables, or click shady links. Instead, they only need to trust Google and ChatGPT, platforms they've either used before or heard about constantly [2].

This approach represents an evolution of ClickFix techniques, in which victims are tricked into running malicious Terminal commands [4]. This campaign proves more effective, however, because victims are actively searching for solutions to real problems rather than fabricated ones [4]. The data exfiltration occurs silently after users grant root access through what appears to be legitimate system maintenance [3].

Response and Implications for AI Platform Security

The harmful ChatGPT conversation remained visible in Google search results for at least half a day after Huntress published its findings [2]. The delayed response highlights the difficulty of moderating content that exploits legitimate platforms. Kaspersky researchers noted that even after reaching one of these manipulated conversations, simply asking ChatGPT whether the provided instructions are safe to execute reveals that they aren't [1].

For users, the fundamental rule remains unchanged: never paste commands into Terminal or a browser address bar without fully understanding what they do [2][5] (see the sketch below for a safe way to inspect them). Sponsored results in Google searches should not be treated as inherently trustworthy [3]. As OpenAI and other AI platforms face mounting competition and scrutiny, this incident underscores how threat actors continuously experiment with new methods of exploiting trusted services [1]. Watch for similar campaigns targeting other operating systems as attackers refine these techniques.
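One practical way to apply that rule: decode any obfuscated blob to plain text first and read it, rather than piping it straight to bash. In this sketch the base64 string is a harmless stand-in that decodes to a simple echo, not the campaign's real payload.

    # Decode the blob WITHOUT executing it, so you can read what it does.
    # This example string is harmless; it decodes to: echo "hello"
    echo 'ZWNobyAiaGVsbG8i' | base64 -d

    # Red flags in the decoded output: a curl or wget piped to bash,
    # a sudo invocation, or yet another layer of base64.

If the decoded text fetches a remote script, asks for your password, or writes hidden files, close the Terminal and look for help elsewhere.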
