India is witnessing a surge in cybercrime driven by AI-powered chatbots, according to new research from Quick Heal Technologies. These scams mimic banks, government agencies, delivery firms, and even family members using natural-sounding conversations and brand impersonation at industrial scale.
Quick Heal’s malware research arm, Seqrite Labs, says it is detecting thousands of new AI-built fraud tools every month. These automated chatbots simulate human conversations, adapt in real time, and exploit breached data to personalise messages. The result: a new wave of scams that are faster, more convincing, and harder to detect than traditional phishing.
Conversational fraud with emotional triggers
The report highlights a shift from static phishing pages to dynamic, dialogue-based manipulation. AI chatbots can pose as customer support agents, fake delivery services, or even voice-cloned relatives in distress. Many start with routine queries — like delivery issues or security verification — and escalate to credential theft, payments, or crypto scams.
Romance scams have evolved too, with bots holding long-term emotional conversations, complete with AI-generated photos, before asking for money. Some AI fraud operations maintain thousands of conversations at once using a single server.
Precision mimicry of trusted brands
Fraudsters are leveraging tools like FraudGPT to build highly targeted attacks that adjust tone and content based on the victim’s profile. Spoofed domains like dhi-delivery.com are nearly indistinguishable from real sites. Chatbots may greet victims by name, refer to actual addresses or transaction history, and use official-looking logos scraped in seconds.
Attackers exploit trust in chat interfaces, often bypassing traditional detection systems by tailoring language and responses. Some scams have even disguised themselves as Meta Security or DHL support bots, luring users into entering OTPs, card numbers, or passwords.
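To illustrate why lookalike domains such as dhi-delivery.com slip past casual inspection, here is a minimal sketch of one common countermeasure: comparing each label of a domain against known brand tokens by edit distance. The brand list and distance threshold below are illustrative assumptions, not Quick Heal's actual detection logic.

```python
import re

# Assumed brand tokens for illustration only.
BRANDS = ["dhl", "meta", "paypal"]

def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def suspicious_tokens(domain: str, max_distance: int = 1):
    """Return (token, brand, distance) hits where a domain label is
    close to, but not exactly, a known brand token."""
    tokens = re.split(r"[.\-]", domain.lower())
    hits = []
    for tok in tokens:
        for brand in BRANDS:
            d = edit_distance(tok, brand)
            if 0 < d <= max_distance:   # 0 would be the genuine brand itself
                hits.append((tok, brand, d))
    return hits

print(suspicious_tokens("dhi-delivery.com"))  # [('dhi', 'dhl', 1)]
print(suspicious_tokens("dhl.com"))           # [] - exact match, not a spoof
```

A one-character swap like "dhi" for "dhl" is trivial for code to catch but easy for a hurried user to miss, which is why scammers favour it.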
AI fraud now needs AI defences
Quick Heal emphasised that conventional instincts — like trusting a familiar tone or layout — are no longer enough. Their antifraud system, Quick Heal Antifraud.AI, now uses real-time URL checks, scam domain blocking, and dark web credential monitoring. It also detects rogue apps and monitors suspicious access to microphones and cameras on Android devices.
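As a rough idea of what real-time URL screening involves, the toy check below normalises a URL and matches its host, including parent domains, against a blocklist. The blocked domains are assumed examples drawn from the scam patterns described above; this is a stand-in sketch, not Quick Heal's implementation.

```python
from urllib.parse import urlparse

# Assumed example entries, echoing the fake delivery and "Meta Security"
# lures described in the report; a real blocklist would be fed live.
BLOCKED_DOMAINS = {"dhi-delivery.com", "meta-security-help.net"}

def is_blocked(url: str) -> bool:
    """Normalise the URL's host and check it, and every parent domain,
    against the blocklist (catches subdomain tricks like login.scam.com)."""
    host = (urlparse(url).hostname or "").lower().rstrip(".")
    parts = host.split(".")
    return any(".".join(parts[i:]) in BLOCKED_DOMAINS
               for i in range(len(parts)))

print(is_blocked("https://login.dhi-delivery.com/verify"))  # True
print(is_blocked("https://dhl.com/track"))                  # False
```

Checking parent domains matters because scam kits routinely hide the blocked domain behind plausible-looking subdomains such as "login" or "secure".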
The company urged users to treat any chatbot requesting sensitive information as a red flag. Organizations are advised to place stronger disclaimers on digital touchpoints, use verified domains only, and invest in anti-fraud AI solutions to stay ahead of attackers.
