As deepfakes flood social media sites, impersonating prominent politicians and movie stars, the Centre has framed draft rules that would make AI content labelling mandatory on platforms such as YouTube and Instagram. The companies would be required to seek a declaration from users on whether the content they upload is synthetically generated.
As per draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, the IT Ministry will also require platforms that allow the creation of AI content to ensure that all such content is prominently labelled or embedded with a permanent unique metadata identifier. For visual content, the label should cover at least 10 per cent of the total surface area; for audio content, it should cover the initial 10 per cent of the total duration.
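To make the two numeric thresholds concrete, here is a minimal Python sketch of how a platform might compute the required label footprint. The function names and the example figures are illustrative assumptions, not part of the draft text.

```python
# Minimal sketch, assuming the draft's two thresholds: a visual label must
# cover at least 10% of the surface area, and an audio label the initial
# 10% of the duration. All names here are hypothetical, not from the draft.

def min_label_area(width_px: int, height_px: int) -> int:
    """Minimum label area (in pixels) for visual content: 10% of total surface area."""
    return int(width_px * height_px * 0.10)

def label_window_seconds(total_duration_s: float) -> tuple[float, float]:
    """Label window for audio content: the initial 10% of the total duration."""
    return (0.0, total_duration_s * 0.10)

# A 1920x1080 frame needs a label covering at least 207,360 square pixels,
# and a 300-second clip must carry the label for its first 30 seconds.
print(min_label_area(1920, 1080))     # 207360
print(label_window_seconds(300.0))    # (0.0, 30.0)
```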
A deepfake is a video in which a person’s face or body has been digitally altered so that they appear to be someone else, typically to spread false information. To be sure, media has always been edited to mislead, but the advent of AI has made the problem far bigger. In the Indian context, the issue first surfaced in 2023, when a deepfake video of actor Rashmika Mandanna entering an elevator went viral on social media. Close on the heels of that incident, Prime Minister Narendra Modi called deepfakes a new “crisis”.
In an explanatory note, the IT Ministry said, “Recent incidents of deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create convincing falsehoods — depicting individuals in acts or statements they never made. Such content can be weaponised to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.”
As per the draft amendments, social media platforms would have to require users to declare whether the information they are uploading is synthetically generated, and to deploy “reasonable and appropriate technical measures”, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations. Where the declaration or technical verification confirms that the information is synthetically generated, the platforms must ensure it is clearly and prominently displayed with an appropriate label or notice indicating that the content is synthetically generated.
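Read literally, the draft sets up a two-step gate: a user declaration followed by an automated cross-check. The sketch below shows one way that logic could be expressed; every name in it is hypothetical and the detector is a stub, since the draft mandates only “reasonable and appropriate technical measures” without prescribing any particular tool.

```python
# Hedged sketch of the declaration-and-labelling gate described above.
# The detector is a placeholder assumption; the draft does not name one.

from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_synthetic: bool

def automated_check(upload: Upload) -> bool:
    """Stub for an automated verification tool (an assumption, not from the draft)."""
    return False  # a real platform would run watermark or model-based detection here

def must_label(upload: Upload) -> bool:
    # Label if the user declares the content synthetic, or if the platform's
    # own verification flags an undeclared synthetic upload.
    return upload.user_declared_synthetic or automated_check(upload)

print(must_label(Upload("vid-001", user_declared_synthetic=True)))  # True
```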
If social media platforms fail in this regard, they may lose the legal immunity they enjoy for third-party content. This means the responsibility of such platforms extends to taking reasonable and proportionate technical measures to verify the correctness of user declarations, and to ensuring that no synthetically generated information is published without such a declaration or label.
Last month, China too rolled out AI labelling rules, under which providers of AI-generated content must display clear labels identifying material created by artificial intelligence. Visible AI symbols are required for chatbots, AI writing, synthetic voices, face swaps and immersive scene editing; for other AI-based content, hidden tags such as watermarks suffice. Platforms must also act as monitors: when AI-generated content is detected or suspected, they must alert users and may apply their own labels.
© The Indian Express Pvt Ltd


