Govt Mandates 3-Hour Takedown Of Deepfakes, Tightens AI Content Rules

The government has amended the IT Rules to impose stricter obligations on platforms handling AI-generated content, requiring flagged deepfakes to be removed within three hours once the rules take effect on February 20, 2026. The rules mandate clear labelling, permanent metadata, and automated detection of illegal material, while barring the removal of AI identifiers, in a bid to boost transparency and accountability online.

Vinay Mishra | Updated: Tuesday, February 10, 2026, 05:54 PM IST

The Centre on Tuesday introduced stricter compliance requirements for social media and online platforms regarding artificial intelligence (AI)-generated and synthetic content, mandating that flagged material, including deepfakes, be removed within three hours of orders from courts or competent authorities.

The Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. The revised regulations, set to take effect on February 20, 2026, formally define AI-generated and synthetic content for the first time.

Under the amended rules, “audio, visual, or audio-visual information” created or modified using AI to appear authentic will be classified as synthetic content. However, routine edits, accessibility enhancements, and legitimate educational or design uses have been kept outside the scope.

A major change is the categorisation of synthetic content as “information,” placing AI-generated material under the same legal framework used to assess unlawful online content. This effectively increases accountability for intermediaries hosting such material.

The notification sharply reduces the takedown timeline from 36 hours to just three hours once an official directive is issued. User grievance resolution deadlines have also been shortened to ensure faster responses.

Platforms that allow the creation or distribution of AI-generated material must now clearly label such content. Where technically feasible, companies are also required to embed permanent identifiers or metadata in it to ensure traceability.

Additionally, intermediaries have been directed to deploy automated systems to detect and curb illegal AI content. The rules specifically target deceptive material, sexually exploitative or non-consensual content, false documentation, child abuse material, impersonation, and content linked to explosives.

The government also barred platforms from removing or altering AI labels once they have been applied, signalling a tougher regulatory stance as concerns over misuse of generative AI continue to grow.