With large language models (LLMs) like ChatGPT, Claude, and Gemini becoming integral to modern applications, Kaspersky has announced a new training program to tackle the rising risks tied to these powerful AI systems. Developed by the Kaspersky AI Technology Research Center, the ‘Large Language Models Security’ course is designed to prepare cybersecurity professionals, AI engineers, and developers to safeguard LLM-powered environments.
As AI and IoT become core parts of enterprise infrastructure, understanding their security implications is no longer optional — it’s critical.
LLMs: High Potential, High Risk
Kaspersky’s own research shows that over 50% of organizations have already integrated AI and IoT into their systems. But this rapid adoption is not without consequence. LLMs introduce new vulnerabilities such as:
Jailbreaks: Attempts to override restrictions placed on AI outputs.
Prompt injections: Malicious prompts that hijack model behavior.
Token smuggling: Exploiting hidden data pathways for unauthorized access.
Kaspersky notes that without proper defenses, these risks can be exploited to leak sensitive data, manipulate outcomes, or breach applications that rely on LLMs for user interaction or automation.
Hands-On Learning for Real-World Threats
Led by Vladislav Tushkanov, Research Development Group Manager at Kaspersky, the course delivers an interactive, hands-on curriculum. Participants will:
Explore common LLM exploitation techniques using simulated labs.
Learn to design layered defense strategies across models, prompts, systems, and services.
Use structured frameworks to evaluate and improve AI security posture.
The training incorporates real-world case studies, making it ideal for professionals building AI applications or working with enterprise-grade AI systems.
Designed for a Broad Range of Professionals
Whether you’re a cybersecurity engineer, AI developer, risk analyst, or even just beginning a career in AI security, this course provides essential skills for the evolving threat landscape. Kaspersky highlights the course as part of its broader Cybersecurity Training portfolio, which now includes specialized tracks in:
AI threat mitigation
IoT security
Secure AI system design
The training is offered in an online format with video lectures, practical labs, and structured assignments.
Why It Matters: A New Generation of Threats
As AI becomes central to decision-making, automation, and communication, attackers are quickly adapting. LLMs are being exploited not just for prompt manipulation, but also to power phishing, spam generation, malware writing, and more.
Tushkanov says, “LLMs have redefined the frontier of cybersecurity. Our training aims to ensure professionals aren’t just chasing threats — they’re prepared for what’s coming.”
Kaspersky positions this course as a must-have foundation for any organization embedding AI into its digital fabric.
