The government has formally amended the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 to create a dedicated framework for “synthetically generated information” – effectively, AI‑generated audio, video and images that can convincingly mimic real people or events. Coming into force on 20 February 2026, the changes tighten due‑diligence requirements for intermediaries and significant social media intermediaries, mandate labelling and provenance for synthetic content, and explicitly extend liability and takedown obligations to deepfakes used for unlawful acts. For enterprises, platforms and individual users, this marks a decisive regulatory shift from soft advisories to enforceable duties around AI content governance.
What the Rules Now Define as Synthetic, and What They Exempt
The amendment introduces a new category of “audio, visual or audio‑visual information”, covering any sound, image, graphic, video or recording, with or without audio, whether created or altered using a computer resource. Within this, “synthetically generated information” is defined as audio‑visual content that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a way that appears real, authentic or true, and portrays an individual or event so convincingly that it is – or is likely to be – indistinguishable from a real person or real‑world event.
Equally important are the explicit carve‑outs. Routine or good‑faith editing, formatting, enhancement, colour correction, noise reduction, transcription and compression are excluded, so long as they do not materially distort the substance, context or meaning of the original content. The rules also clarify that standard creation and use of documents, presentations, PDFs, training materials and research outputs, even when they contain illustrative, hypothetical or template‑based content, are not treated as synthetic if they do not create a false document or false electronic record. Similarly, the use of computer tools purely to improve accessibility, clarity, translation, description, searchability or discoverability, without altering any material part of the underlying audio‑visual information, falls outside the synthetic category.
For enterprises and content teams, this distinction matters: everyday editing and generative enhancements used to clean, translate or re‑format corporate content are not automatically treated as deepfakes, but once content is altered to realistically depict things that did not happen, it moves into a regulated zone.
Platform Duties: Detection, Labelling, Takedown and Faster Response
The rules clarify that whenever “information” is referred to in the context of unlawful acts under the intermediary framework, that reference now includes synthetically generated information, unless the context clearly indicates otherwise. Intermediaries are explicitly told that removing or disabling access to information – including synthetic content – using reasonable technical measures, automated tools or other mechanisms in compliance with these rules will not, by itself, be seen as violating the safe‑harbour conditions in section 79 of the IT Act. This gives platforms more legal comfort to act aggressively against harmful deepfakes.
All intermediaries must now inform users, at least once every three months, through their terms of service, privacy policies or user agreements, that non‑compliance can lead to immediate suspension of access, removal or disabling of information, and potential penalties under applicable laws.
Where non‑compliance involves the illegal creation, hosting or sharing of information, users may face punishment under the IT Act or other laws; where the conduct amounts to an offence, including cases covered by the Bharatiya Nagarik Suraksha Sanhita 2023 or the Protection of Children from Sexual Offences Act 2012, intermediaries are required to report it to the appropriate authority.
Additional obligations apply to intermediaries that provide tools capable of creating or disseminating synthetically generated information. Such entities must clearly warn users that directing or instructing their computer resources to create or share synthetic information in violation of the rules can trigger penalties under a wide range of laws – including the IT Act, the Bharatiya Nyaya Sanhita 2023, POCSO, the Representation of the People Act, the Indecent Representation of Women (Prohibition) Act, the Sexual Harassment of Women at Workplace Act, and the Immoral Traffic (Prevention) Act. Users must also be told that violations can lead to immediate disabling or removal of content, suspension or termination of their accounts, disclosure of their identity to complainants in certain cases, and mandatory reporting to authorities where laws require it.
The amendment significantly tightens response timelines. Where the earlier rules allowed “thirty‑six hours” or longer to take down notified content, the revised text shortens several windows to “within three hours”, “two hours” or “seven days”, depending on the context, signalling a push for near‑real‑time action on serious violations. For enterprises running user‑generated platforms, this requires upgraded workflows, 24×7 escalation and tighter integration between legal, trust‑and‑safety and engineering teams.
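To make the operational impact concrete, here is a minimal sketch of an SLA tracker that computes the act‑by deadline for an incoming complaint. The category names and window values are placeholders, not the rule text: the amendment attaches different windows to different obligations, so any real mapping must be drawn from the rules themselves.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical mapping of complaint categories to response windows.
# The actual windows depend on which obligation in the amended rules
# applies; the names and durations below are illustrative placeholders.
RESPONSE_WINDOWS = {
    "urgent_unlawful_content": timedelta(hours=3),
    "identity_related_request": timedelta(hours=2),
    "general_grievance": timedelta(days=7),
}

def takedown_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act."""
    window = RESPONSE_WINDOWS.get(category)
    if window is None:
        raise ValueError(f"no response window configured for {category!r}")
    return received_at + window

if __name__ == "__main__":
    received = datetime.now(timezone.utc)
    print("act by:", takedown_deadline("urgent_unlawful_content", received))
```

A tracker like this is only useful if it feeds an alerting path that pages trust‑and‑safety staff well before the deadline expires, which is why the shorter windows effectively force round‑the‑clock escalation.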
Deepfake‑Specific Due Diligence and Labelling Requirements
A new dedicated due‑diligence regime is created for synthetically generated information. Any intermediary whose computer resources may enable the creation, modification or dissemination of such content must deploy reasonable and appropriate technical measures – including automated tools – to prevent users from generating or sharing synthetic information that violates existing laws. The rules explicitly call out categories that must be blocked: child sexual exploitation material, non‑consensual intimate imagery, obscene or sexually explicit content, material that results in false documents or false electronic records, content relating to the preparation or procurement of explosives, arms or ammunition, and synthetic media that falsely depicts a person’s identity, voice, conduct, actions or statements, or misrepresents an event in a deceptive manner.
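One way such “reasonable and appropriate technical measures” could look in code is a policy gate evaluated before a generation request is served. The sketch below is an outline under stated assumptions: the category identifiers loosely mirror the prohibited classes above, and `classify_request` is a hypothetical stand‑in for a real moderation model or API.

```python
# Categories loosely mirroring the rule's prohibited classes; the
# identifiers are illustrative, not statutory language.
PROHIBITED_CATEGORIES = {
    "child_sexual_exploitation",
    "non_consensual_intimate_imagery",
    "obscene_or_sexually_explicit",
    "false_document_or_record",
    "weapons_or_explosives_instructions",
    "deceptive_impersonation",
}

def classify_request(prompt: str) -> set[str]:
    """Hypothetical moderation classifier; a real system would call a
    trained model or a moderation API here."""
    return set()  # placeholder: no categories detected

def gate_generation(prompt: str) -> bool:
    """Return True if the generation request may proceed."""
    detected = classify_request(prompt) & PROHIBITED_CATEGORIES
    if detected:
        # Refuse and log; for certain categories the rules also
        # contemplate account action and mandatory reporting.
        print(f"blocked request; detected: {sorted(detected)}")
        return False
    return True
```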
For synthetic content that is not outright illegal, the focus shifts to transparency. Every such item must carry a prominent label that is clearly visible on screen, or a clear audio disclosure in the case of audio‑only content, indicating that it is synthetically generated using a computer resource. To the extent technically feasible, such content must also embed permanent metadata or other technical provenance mechanisms, including a unique identifier, enabling identification of the intermediary’s computer resource used to create or modify it. Intermediaries are expressly prohibited from allowing the modification, suppression or removal of this label or metadata.
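In practice, the label‑plus‑provenance duty could be met by drawing a visible disclosure onto the media and writing a unique identifier into its metadata. The sketch below does this for a PNG using Pillow; the label wording, metadata keys and `tool_id` parameter are assumptions, and a production system would more likely use a standards‑based provenance scheme such as C2PA manifests rather than bare text chunks.

```python
import uuid
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(src_path: str, dst_path: str, tool_id: str) -> str:
    """Stamp a visible disclosure on the image and embed provenance metadata.

    `tool_id` identifies the computer resource that generated the content;
    the key names and label text here are illustrative assumptions.
    """
    content_id = str(uuid.uuid4())  # unique identifier for this item
    img = Image.open(src_path).convert("RGB")

    # Visible on-screen label, as the rules require for visual content.
    draw = ImageDraw.Draw(img)
    draw.text((10, 10), "AI-generated content", fill="white")

    # Embedded provenance metadata (PNG text chunks as a stand-in for a
    # standards-based mechanism such as a C2PA manifest).
    meta = PngInfo()
    meta.add_text("synthetic", "true")
    meta.add_text("generator", tool_id)
    meta.add_text("content_id", content_id)
    img.save(dst_path, pnginfo=meta)
    return content_id
```

Because the rules prohibit allowing the label or metadata to be stripped, a platform would also need to re‑verify these markers on any subsequent edit or re‑upload path, not just at creation time.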
Significant social media intermediaries – the larger platforms meeting user‑base thresholds – face even more specific duties. Before displaying, uploading or publishing any information, they must require users to declare whether the content is synthetically generated, deploy technical measures (including automated tools) to verify the accuracy of those declarations, and, where content is confirmed as synthetic, ensure that it carries a clear and prominent label or notice. If such a platform is found to have knowingly permitted, promoted or failed to act on synthetic information in violation of the rules, it is deemed to have failed in its due‑diligence obligations.
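A minimal sketch of how a large platform might wire the declare‑verify‑label sequence into an upload path follows. The `looks_synthetic` detector is a hypothetical stand‑in for whatever automated verification the platform deploys, and the field and function names are assumptions, not the rule text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Upload:
    media: bytes
    declared_synthetic: bool      # user's declaration at upload time
    label: Optional[str] = None   # set before display if synthetic

def looks_synthetic(media: bytes) -> bool:
    """Hypothetical automated check verifying the user's declaration.

    A real deployment would call a trained detector or a provenance
    verifier here; this stub is a placeholder."""
    return False  # placeholder

def process_upload(upload: Upload) -> Upload:
    # Verify the declaration with technical measures; treat content as
    # synthetic if either the user declares it or the detector flags it.
    is_synthetic = upload.declared_synthetic or looks_synthetic(upload.media)
    if is_synthetic:
        upload.label = "Synthetically generated content"
    return upload
```

The design point is that the declaration and the automated check are independent signals: relying on the declaration alone would leave the platform exposed to the deemed‑failure provision for content it should have caught.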
What This Means for Enterprises, Platforms and Users
For enterprises operating in India – whether consumer platforms, enterprise SaaS, media, fintech or e‑commerce – these amendments transform AI content governance from a policy choice into a compliance requirement. Any business that offers tools for image, video, voice or other media generation, or that hosts user‑generated content, will need to:
- Map where their systems can create or transform audio‑visual content in ways that might qualify as synthetically generated information (a minimal inventory sketch follows this list).
- Implement or enhance automated detection and classification of synthetic media, at least to the level of differentiating high‑risk categories (deepfakes, non‑consensual content, deceptive political or financial material) from routine edits.
- Build labelling, watermarking and provenance mechanisms into their content pipelines, and ensure these markers cannot be stripped or altered within their own services.
- Update user terms, onboarding flows and periodic communications to reflect the new warnings, enforcement actions and legal consequences set out in the rules.
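For the mapping exercise in the first item above, a simple machine‑readable register of generation‑capable surfaces can seed the compliance programme. The structure and field names below are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class MediaCapability:
    """One product surface that can create or alter audio-visual content.
    Field names are illustrative assumptions."""
    surface: str                  # e.g. "avatar-generator"
    media_types: list[str]        # "image", "audio", "video"
    can_depict_real_people: bool  # highest-risk axis under the rules
    labelled: bool                # visible disclosure wired in?
    provenance_embedded: bool     # metadata / unique identifier wired in?

REGISTER: list[MediaCapability] = [
    MediaCapability("profile-avatar-generator", ["image"], True, True, True),
    MediaCapability("support-call-summariser", ["audio"], False, False, False),
]

def gaps(register: list[MediaCapability]) -> list[str]:
    """Surfaces that can depict real people but lack label or provenance."""
    return [c.surface for c in register
            if c.can_depict_real_people
            and not (c.labelled and c.provenance_embedded)]
```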
For employees and individual users, the regime raises the stakes on misuse of generative AI. Creating or sharing deepfakes that misrepresent a person, fabricate documents, or fall into prohibited categories can now more readily trigger account suspension, exposure of identity to complainants in defined scenarios, and criminal reporting to authorities. At the same time, clearer labelling and provenance obligations should give genuine users and brands better tools to challenge impersonation, reputational attacks and manipulated media campaigns.
For India’s digital ecosystem, these rules signal a move toward an infrastructure where AI‑generated content is not banned, but must be controlled, traceable and transparently identified. Enterprises that invest early in robust AI governance, detection and labelling will be better placed to comply, protect their users and reduce exposure to enforcement under the IT Act, the new criminal codes and sectoral laws as deepfake incidents continue to rise.
