AI chatbots and assistants have quietly become the first stop for routine tech help—from cleaning up storage to fixing obscure errors on macOS. That shift in user behaviour is now being weaponised. Cybersecurity teams are tracking campaigns where attackers embed malicious “fixes” inside AI-style guides and shared chatbot conversations, then drive Mac users to these pages through poisoned search results and paid ads.
Instead of asking victims to download a suspicious file, the attackers present a polished, step‑by‑step answer that ends with a seemingly harmless instruction: copy a one‑line command into Terminal to “install” a tool, “update” a browser, or “clean” the system. Run that command, and the user effectively installs the malware themselves.
How ClickFix Attacks Evolve in the Age of AI
These campaigns borrow from a social‑engineering technique known as ClickFix, where victims are coaxed into executing commands under the pretext of solving a problem or proving they are human. Traditionally, this involved fake error pop‑ups, CAPTCHA prompts, or spoofed system alerts. Now, the lure has moved into AI.
Researchers from Kaspersky, Huntress and others describe a pattern: a user searches for something like “free up space on Mac” or “clear system data on iMac”, clicks a sponsored link or top result, and lands on what appears to be a real ChatGPT or Grok conversation. The shared chat looks like a normal troubleshooting exchange, with a concise “solution” at the bottom. That solution instructs the user to run a Terminal command which, in reality, downloads and launches an info‑stealing payload.
What the AMOS Stealer Does on macOS
The main payload linked to these AI‑poisoning campaigns is Atomic macOS Stealer, widely known as AMOS. Once the Terminal command is executed, it fetches a script from a remote server; that script typically prompts the user to enter their macOS password under the guise of installing or authorising a tool. With those credentials, the malware installs itself with elevated privileges and sets itself to persist across reboots.
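Persistence of this kind on macOS usually means a launchd property list dropped into the user's LaunchAgents folder. As a minimal defensive sketch, the check below scans a directory of `.plist` files for `ProgramArguments` containing fetch‑and‑execute markers; the indicator strings are illustrative assumptions, not a vetted AMOS signature.

```python
import plistlib
from pathlib import Path

# Strings that commonly appear in malicious launchd persistence entries.
# Illustrative indicators only -- not taken from any vendor detection rule.
SUSPICIOUS = ("curl", "wget", "bash -c", "osascript", "/tmp/", "base64")

def scan_launch_agents(directory: Path) -> list[str]:
    """Return paths of .plist files whose ProgramArguments look risky."""
    flagged = []
    for plist_path in directory.glob("*.plist"):
        try:
            with open(plist_path, "rb") as fh:
                data = plistlib.load(fh)
        except (plistlib.InvalidFileException, OSError):
            continue  # unreadable files are skipped, not flagged
        args = " ".join(str(a) for a in data.get("ProgramArguments", []))
        if any(marker in args for marker in SUSPICIOUS):
            flagged.append(str(plist_path))
    return flagged
```

On a real Mac you would point this at `~/Library/LaunchAgents` (and, with admin rights, `/Library/LaunchDaemons`); anything flagged deserves manual review rather than automatic deletion, since legitimate updaters also invoke shells.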
AMOS is built to harvest valuable data at scale. It can pull browser cookies, saved passwords, autofill data and session tokens from popular browsers, scrape credentials from macOS Keychain, and specifically target cryptocurrency wallets and finance‑related applications. In many observed cases, it also searches Desktop, Documents and Downloads for text, PDF and document files, exfiltrates them to an attacker‑controlled server, and may deploy a backdoor module that allows remote command execution and further payloads.
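To gauge how much a single compromised session exposes, defenders can inventory exactly the material described above. The sketch below walks the user folders and document types AMOS reportedly sweeps; the folder and extension lists come from the behaviour described here, while the audit function itself is an illustrative assumption, not vendor tooling.

```python
from pathlib import Path

# Folders and document types AMOS is reported to sweep (per the campaign
# write-ups); the audit itself is an illustrative sketch.
TARGET_DIRS = ("Desktop", "Documents", "Downloads")
TARGET_EXTS = {".txt", ".pdf", ".doc", ".docx"}

def exposed_files(home: Path) -> list[Path]:
    """List files a stealer-style sweep of the user folders would collect."""
    hits = []
    for name in TARGET_DIRS:
        folder = home / name
        if not folder.is_dir():
            continue  # folder may not exist on every account
        hits.extend(p for p in folder.rglob("*")
                    if p.is_file() and p.suffix.lower() in TARGET_EXTS)
    return hits
```

Running `exposed_files(Path.home())` gives a quick sense of the blast radius: every file it lists is something an info‑stealer with the user's credentials could exfiltrate in one pass.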
Why This Attack Works So Well
The success of these campaigns has less to do with exotic exploits and more to do with trust. Users see familiar brands (Google, OpenAI, Grok), clean UIs and fluent language, and unconsciously treat what they are reading as vetted advice. The AI format adds an extra layer of credibility: the instructions look neutral, technical and routine, so the copy‑paste step feels low‑risk.
To make matters worse, AI platforms’ share‑chat features were designed to make it easy to circulate helpful conversations. Attackers are abusing that design, crafting conversations with prompt engineering, trimming away incriminating context, and then pushing the final “guide” via SEO and ad campaigns. To the victim, the URL, branding and content all appear legitimate—even though the end goal is to make them bypass their own security instincts.
Practical Defences for Users and Enterprises
Security professionals now emphasise a simple rule: commands that ask you to open Terminal or PowerShell should be treated as high‑risk, not routine, whether they come from AI chats, forums, docs or email. If a “fix” asks you to paste and run a one‑liner, especially one piped from a remote URL, stop and question it.
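That “stop and question it” step can be partly automated. The heuristic below flags the fetch‑and‑execute shapes these lures rely on; the patterns are a minimal sketch of the technique, not an exhaustive or vendor‑grade detection rule.

```python
import re

# Heuristic patterns for fetch-and-execute one-liners; the list is an
# illustrative assumption, not a complete catalogue of malicious shapes.
RISKY_PATTERNS = [
    re.compile(r"curl\s+[^|;]*\|\s*(ba|z)?sh"),  # curl ... | sh / bash / zsh
    re.compile(r"wget\s+[^|;]*\|\s*(ba|z)?sh"),  # wget ... | shell
    re.compile(r"base64\s+(-d|--decode)"),       # decoding an obfuscated payload
    re.compile(r"bash\s+-c\s+[\"']"),            # inline shell strings
    re.compile(r"osascript\s+-e"),               # scripted macOS dialogs
]

def looks_risky(command: str) -> bool:
    """Flag commands that download-and-run code or decode hidden payloads."""
    return any(p.search(command) for p in RISKY_PATTERNS)
```

A match does not prove malice (plenty of legitimate installers use `curl | sh`), but it is exactly the moment to read the fetched script before running anything, or to ask someone you trust.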
For individual users, safer practices include verifying commands with a trusted IT contact, asking a separate tool to explain what a command does in plain language, and avoiding any “support” journey that starts from a search ad. For organisations, tightening macOS endpoint controls, monitoring for AMOS indicators, and updating security awareness training to explicitly cover AI‑poisoned guides and ClickFix‑style lures are now essential. As AI becomes the main interface for support, treating it as inherently untrusted input—not an automatic authority—is the only way to keep that convenience from turning into a compromise.
