Kaspersky Flags Abuse of OpenAI Invitations in Spam Campaign

Kaspersky researchers have identified scammers exploiting OpenAI’s legitimate organisation and team invitation features to deliver spam from authentic platform addresses. Attackers embed fraudulent links and phone numbers directly into the organisation name field during registration, bypassing traditional email filters. When victims receive invitations that genuinely come from OpenAI, the malicious content appears only as oddly formatted text inside an otherwise legitimate template.

Multi-Stage Abuse Leverages Platform Legitimacy

Scammers first create OpenAI accounts, entering malicious payloads into the organisation name field, which accepts arbitrary characters. The platform’s “invite your team” function then distributes invitations carrying the embedded threats to target email lists. Messages promote fake adult services or deliver vishing lures posing as renewal notifications for expensive subscriptions. Victims are directed to call fraudulent numbers to “cancel,” leading to further compromise. Anna Lazaricheva, senior spam analyst at Kaspersky, notes that attackers weaponise platform features for social engineering, exploiting user trust in reputable services.

Embedded Deception Bypasses Traditional Defences

The scam’s effectiveness stems from its technical legitimacy: invitations originate from genuine OpenAI infrastructure, with the scam text bolded inside the malformed organisation name. Victims overlook the inconsistency within a familiar collaboration template. Similar threats are likely to propagate through comparable abuse vectors. Kaspersky urges users to verify all unsolicited platform invitations, inspect URLs manually before clicking, and contact official support channels rather than phone numbers supplied in an email. Multi-factor authentication further strengthens accounts against unauthorised access.

Platform Feature Abuse Signals Broader Risks

This campaign highlights vulnerabilities when legitimate services enable unstructured text input combined with distribution mechanisms. Scammers bet on recipient distraction amid expected collaboration notifications. Enterprises must train users to scrutinise platform communications regardless of source authenticity. Kaspersky recommends immediate reporting of suspicious platform activity to providers. As AI platforms proliferate, feature-level abuse risks escalate, demanding proactive monitoring and user education.

Defensive Recommendations for Enterprises

Organisations should implement email filtering rules that inspect messages from legitimate OpenAI domains for malformed organisation names containing embedded URLs or phone numbers. User awareness training must emphasise invitation verification protocols. Incident response teams need playbooks for vishing containment and link analysis. Multi-vendor threat intelligence sharing accelerates detection of evolving tactics. Enterprises using OpenAI at scale face elevated risk from abuse arriving over authenticated channels, and should layer behavioural analytics over signature-based defences.
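A filtering rule of this kind can be sketched as a simple heuristic that scans the free-text organisation name for the payload types described in the campaign, embedded URLs and phone numbers. The patterns and function below are illustrative assumptions, not Kaspersky’s or OpenAI’s actual detection logic:

```python
import re

# Hypothetical heuristics (not vendor detection logic): URLs or bare domains,
# and phone-number-like digit runs, embedded in an organisation name.
URL_RE = re.compile(r"(?:https?://|www\.)\S+|\b\w[\w-]*\.(?:com|net|org|ru|top|xyz)\b", re.I)
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def is_suspicious_org_name(name: str) -> bool:
    """Return True if a free-text organisation name looks like a spam payload."""
    return bool(URL_RE.search(name) or PHONE_RE.search(name))

print(is_suspicious_org_name("Acme Research"))                   # benign name
print(is_suspicious_org_name("CALL +1 800 555 0199 to cancel"))  # embedded phone number
print(is_suspicious_org_name("visit www.example.com now"))       # embedded URL
```

In production such a check would run on inbound invitation mail from authenticated OpenAI domains, quarantining rather than rejecting matches, since legitimate organisation names occasionally contain domain-like strings.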
