The Ministry of Electronics and IT (MeitY) is drafting a new law focused on artificial intelligence (AI), which will notably avoid prescribing penal consequences for violations, recognising the technology’s significant benefits, according to a report by The Indian Express. The legislation will be a standalone law requiring social media platforms such as Facebook, Instagram, YouTube, and X to add watermarks and labels to AI-generated content.
MeitY is also exploring legal frameworks to mandate companies developing large language models to train their systems on Indian languages and context-specific content.
The discussion around AI content warnings has been prominent since last year. In India, it gained attention after deepfake videos of actors and citizens surfaced and AI systems like Google’s Gemini showed inconsistencies in responses about political figures, including Prime Minister Narendra Modi.
Development of AI legislation in India
Last November, Union IT Minister Ashwini Vaishnaw announced plans to regulate the spread of deepfakes on social media, identifying them as a ‘threat to democracy’. Vaishnaw highlighted the government’s strategy focusing on deepfake detection, prevention, reporting, and public awareness.
On March 1, MeitY also issued an advisory mandating the labelling of under-trial AI models and prohibiting unlawful content. This directive has been reinforced by a recent advisory from the ministry requiring all AI-generated content to be labelled uniformly.
In May, IT Secretary S Krishnan further reassured the industry that while the government seeks to regulate AI, it will not stifle innovation. Reflecting on the approach taken with the Digital Personal Data Protection (DPDP) Act, Krishnan stated, “We will ensure that both the interests of innovation and protection of vital interests will come in in the future.”
What AI warning labels do tech companies use?
Earlier this year, Meta announced the development of tools to identify ‘invisible markers’ in AI-generated content, in line with the standards set by the Coalition for Content Provenance and Authenticity (C2PA) in the United States, where the company is based. This initiative also aims to label images from major AI developers including Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.
Meta stated in April, “We will begin adding ‘AI info’ labels to a wider range of video, audio, and image content when we detect industry standard AI image indicators or when people disclose that they’re uploading AI-generated content.”
Similarly, in May, TikTok, which is banned in India, began automatically labelling AI-generated videos and images using ‘Content Credentials’, a digital watermarking technology from the C2PA.
Adobe, Arm, Intel, Microsoft, and Truepic jointly founded the C2PA, which aims to provide context and history for digital media through comprehensive provenance systems.
Until India’s legislation is finalised, it remains unclear how far the new law will affect the efforts already made by major global tech companies.