The Union government’s proposal to make labelling of synthetic content generated by Artificial Intelligence (AI) systems mandatory is a timely and necessary step. This regulation seeks to address a powerful technology capable of spreading misinformation, stoking social unrest, and damaging individual reputations.
If implemented as proposed, the rule would require creators of AI-generated visuals, videos, audio and even text to label such content clearly, with the label covering at least 10% of the visual display. How this requirement would translate to non-visual formats such as audio remains unclear, though the need for explicit identification applies to all media capable of misleading the public.
Unlike several other major economies where AI innovation and regulation are being debated as part of a broad national framework, India’s move appears piecemeal, introduced through amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. Every country faces the challenge of regulating AI just enough to prevent harm while still encouraging innovation and competitiveness.
The European Union, eager to give its members a strong start, has taken a risk-based approach. It prohibits AI activities deemed to pose “unacceptable risks”, such as social scoring or manipulative behavioural targeting, while requiring the highest compliance standards for “high-risk” applications in sectors such as education, employment, law enforcement and public services. Labelling generative “deepfake” content, such as that proposed in India, falls under the “limited risk” category in the EU’s framework.
China, pursuing dominance in AI, has gone further by developing national standards for cybersecurity, data protection, and generative AI training models. It mandates both explicit and implicit labelling of synthetic content, embedding such information within metadata.
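“Implicit” labelling of this kind points at a concrete mechanism: a machine-readable provenance tag written into the file itself rather than displayed on screen. The sketch below, using the Pillow imaging library, illustrates the idea for a PNG image. The tag names (`ai_generated`, `ai_provider`) are invented for illustration, not any jurisdiction’s actual schema; production schemes such as C2PA rely on cryptographically signed manifests rather than plain text chunks.

```python
# A minimal sketch of implicit labelling: embedding a provenance tag in
# an image's metadata. Tag names are illustrative assumptions, not a
# real regulatory schema. Requires the Pillow library.
import io

from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_png(image: Image.Image, provider: str) -> bytes:
    """Serialize a PNG with hypothetical provenance text chunks embedded."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("ai_provider", provider)
    buf = io.BytesIO()
    image.save(buf, format="PNG", pnginfo=meta)
    return buf.getvalue()


def read_label(png_bytes: bytes) -> dict:
    """Recover the embedded text chunks from a PNG byte stream."""
    with Image.open(io.BytesIO(png_bytes)) as img:
        return dict(img.text)


# Example: tag a synthetic image and verify the label survives a round trip.
synthetic = Image.new("RGB", (64, 64), color="grey")
data = label_png(synthetic, provider="example-model")
print(read_label(data)["ai_generated"])  # → true
```

A tag like this travels with the file through downloads and re-uploads, which is what makes implicit labelling complementary to, rather than a substitute for, the visible labels India’s proposal emphasises.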
The United States, with its strong federal structure, faces a more complex regulatory environment, especially given the Donald Trump administration’s competitive stance towards China’s technological ascent. Synthetic content, including humour and satire, is now commonplace on social media, sometimes shared by political leaders themselves. The EU’s approach allows such creative expression, requiring only that deepfakes be identified, without curbing artistic freedom.
This is the balance India must strike: protecting democratic integrity while nurturing innovation. Effective AI regulation must rest on strong institutions: clear complaint mechanisms, transparent appellate bodies, and an overarching framework. Labelling deepfakes is a welcome start, but it must evolve into a comprehensive regulatory regime.