The Delhi Police Special Cell recently registered a first information report (FIR) in connection with a deepfake, AI-generated video of actor Rashmika Mandanna, putting the issue of regulating such technology at the forefront.
Experts say a faster redress mechanism is the need of the hour.
The FIR has been registered under Sections 465 (forgery) and 469 (harming reputation) of the Indian Penal Code, 1860, and Sections 66C (identity theft) and 66E (privacy violation) of the Information Technology Act, 2000.
“The legal rights of artists in regard to deepfake videos are still a developing area of law. However, there are a few general principles that may apply, that is, artists have a copyright in their work; artists have the right to publicity; artists have the right to privacy,” said Kunal Sharma, partner, Singhania & Co.
In the video in question, the actor can be seen in a black workout outfit inside an elevator. The video was originally posted by British-Indian influencer Zara Patel, whose face was replaced with Mandanna’s using artificial intelligence (AI) tools.
Days after this video, a digitally altered or deepfake image of Katrina Kaif from her upcoming film Tiger 3 also surfaced online.
But how are such videos tracked?
Deepfake videos often contain artefacts that can be detected using technical analysis. “Law-enforcement agencies utilise specialised tools such as IP address tracking, Metadata Analysis, and Open-Source Intelligence (OSINT). Advanced technologies like facial recognition and content recognition algorithms play a pivotal role, aiding in identifying patterns in upload activity and tracking the spread of unauthorised content,” said Ankur Mahindro, managing partner, Kred Jure.
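The “content recognition algorithms” Mahindro mentions typically work by fingerprinting media so that reuploads can be matched even after small edits. As a hedged illustration only (not the tooling agencies actually use), here is a minimal average-hash sketch in Python; the toy 4x4 “frame” and the edit applied to it are invented for demonstration:

```python
# Minimal average-hash (aHash) sketch: fingerprint a small grayscale
# frame so near-duplicate uploads can be matched. Real systems use
# robust perceptual hashes computed over many video frames.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (0-255). Returns a bit string."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return ''.join('1' if p > avg else '0' for p in flat)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

# A toy 4x4 "frame" and a lightly edited copy (one pixel brightened).
frame = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
edited = [row[:] for row in frame]
edited[0][0] = 30  # small edit, as in a recompressed reupload

h1, h2 = average_hash(frame), average_hash(edited)
print(hamming(h1, h2))  # prints 0: small edits leave the hash unchanged
```

A low Hamming distance between two fingerprints flags the uploads as likely copies of the same content, which is what lets platforms and investigators track the spread of a clip across accounts.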
The current legal framework governing the tracking of unauthorised videos includes provisions under the Information Technology Act (Section 67A), the Copyright Act, 1957, and the Indian Penal Code, 1860 (Section 292). Courts have often intervened by directing the takedown of URLs or posts containing unauthorised content.
“In India, there is a specific provision in the Information Technology Act, 2000, that may be applicable in cases of deepfake crimes that involve capturing and transmitting a person’s images which may violate their privacy. Such an act, if established, can attract imprisonment of up to three years or a fine of up to Rs 2 lakh. Even a platform that carries such deepfakes can be held liable if it fails to take action upon notice,” said Safir Anand, IPR lawyer and joint managing partner, Anand & Naik.
The Delhi Commission for Women had sought action after the video of the actor went viral. It highlighted that no arrests had been made in the case to date and sought a copy of the FIR, with details of the accused, by November 17.
The Centre has issued an advisory to social-media intermediaries, instructing them to exercise due diligence and make efforts to identify misinformation and deepfakes.
The government said intermediaries were required to act on such cases within the timeframes stipulated under the IT Rules, 2021, and to prevent users from hosting such content. Any reported content, it said, must be removed within 36 hours.
However, proper regulation is required, say experts.
“A legal framework for deepfake videos should include a clear definition of what constitutes one, as well as a prohibition on certain uses of deepfake videos, such as using them to commit fraud, impersonate others, or interfere with elections,” said Sharma.
The real difficulty lies in the time taken to secure redress.
“The significant challenge lies in the rapid dissemination of these videos. Establishing an active watchdog mechanism is imperative to effectively address this issue. However, it is noteworthy that our existing court processes, while thorough, tend to operate with a deliberate pace,” said Meghna Mishra, partner, Karanjawala & Company.
“Additionally, social-media platforms and other online platforms should be required to take steps to detect and remove deepfake videos. Victims of deepfake videos should also have the ability to seek legal recourse from the creators and distributors of the videos,” Sharma said.
Mandanna wrote, “If this happened to me when I was in school or college, I genuinely can’t imagine how I could ever tackle this.”
Mishra concurred with this view.
“While stringent laws exist to protect minors in such situations, the urgency with which action is taken to remove compromising videos becomes a critical factor. The delayed response to address these issues may inadvertently contribute to severe consequences, particularly when it involves minors who may be driven to take extreme measures,” she said.
A number of countries are currently developing legal frameworks for deepfake videos. For example, the European Union has proposed legislation that would require social-media platforms to take steps to detect and remove deepfake videos. The United States too has proposed legislation that would prohibit certain uses of deepfake videos.
However, it is important to balance the rights of artists against fair use.
“Deepfake videos can be used to create and share artistic and satirical content, and it is important to ensure that the law does not unduly restrict this right,” said Sharma.
Face-to-face threat
- Deepfakes raise concerns about identity theft, online obscenity, fraud, cyber-terrorism, and misinformation
Expert view
- Establishing an active watchdog mechanism to address this issue
- Social media platforms should take steps to detect and remove such videos
- Important to consider the rights of artists for fair use
How to identify AI-modified videos
- Blurring or distortion around the face
- Mismatched lighting and shadows compared with the original footage
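The blurring cue above can also be checked programmatically: a common heuristic scores sharpness as the variance of an image’s Laplacian, with low scores suggesting the smoothing often left around a swapped face. This is a minimal pure-Python sketch, assuming small grayscale patches (the checkerboard and flat patches below are invented test data, not real frames):

```python
def laplacian_variance(img):
    """Sharpness score: variance of a 4-neighbour Laplacian over the
    interior of a 2D grayscale image (list of lists, values 0-255).
    Low variance suggests a blurred or over-smoothed region."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]
                   - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

# Sharp checkerboard patch vs. a flat (blurred-looking) patch.
sharp = [[0 if (x + y) % 2 else 255 for x in range(6)] for y in range(6)]
flat = [[128] * 6 for _ in range(6)]

print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice a detector would compare the score of the face region against the rest of the frame; a face that is noticeably softer than its surroundings is one of the artefacts forensic tools look for.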