India Takes First Step Towards Regulating AI & Deepfake Content

India has taken the first step towards regulating the increasing threat of deepfakes and AI-generated misinformation. As part of it, the Ministry of Electronics and Information Technology (MeitY) on October 22, 2025, released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

With deepfakes becoming more sophisticated and generative AI tools in widespread use, concerns over identity theft and organized misinformation campaigns have increased.

Under the new rules, social media platforms must require their users to declare any AI-generated or AI-altered content. The obligation to label such content rests with the social media intermediaries, and the companies operating the platforms may flag the accounts of users who violate the rules.

The rules make it mandatory for companies to display visible AI watermarks and labels covering more than 10% of the duration or size of the content. Social media companies can also lose their safe harbour protection if violations are not dealt with proactively. The ministry has given industry stakeholders until November 6 to provide feedback on the draft amendments.

Deep Concern

The draft rules reflect growing concern about the rise of deepfakes, fabricated content that mimics a person's appearance, voice, mannerisms, or other traits. Generating deepfakes and synthetic content has been made easier by tools such as OpenAI's ChatGPT and Google's Gemini.

Union IT Minister Ashwini Vaishnaw said the amendment raises accountability for users, companies, and the government alike.

The Centre had already consulted top AI firms, which confirmed that metadata can be used to identify AI-altered content; the rules have been prepared on the basis of that understanding, an IT ministry official said.
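To illustrate what metadata-based identification can look like in practice, here is a minimal, hypothetical Python sketch that scans a media file's bytes for common provenance markers that AI tools may embed, such as C2PA (Content Credentials) manifests or the IPTC "trainedAlgorithmicMedia" digital source type. This is not the method prescribed by the draft rules; the file name and marker list are illustrative assumptions.

```python
# Illustrative sketch only: looks for well-known AI-provenance markers
# embedded in a file's metadata. Not an implementation of the draft rules.
from pathlib import Path

# Markers commonly associated with AI-generated or AI-edited media
# (C2PA manifest label and IPTC DigitalSourceType values).
PROVENANCE_MARKERS = [
    b"c2pa",
    b"trainedAlgorithmicMedia",
    b"compositeWithTrainedAlgorithmicMedia",
]

def find_ai_markers(path: str) -> list[str]:
    """Return any known AI-provenance markers found in the file's raw bytes."""
    data = Path(path).read_bytes()
    return [marker.decode() for marker in PROVENANCE_MARKERS if marker in data]

if __name__ == "__main__":
    # "sample_image.jpg" is a hypothetical file name for demonstration.
    hits = find_ai_markers("sample_image.jpg")
    if hits:
        print("Possible AI-generated or AI-altered content; markers found:", hits)
    else:
        print("No known AI-provenance markers found in the file.")
```

A real detection pipeline would verify cryptographically signed C2PA manifests rather than search for raw strings, but the sketch shows why embedded metadata makes automated labelling feasible for platforms.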
