Tech giant Meta and a cross-industry alliance on Monday announced a joint effort to launch a fact-checking helpline on messaging platform WhatsApp to combat the menace of artificial intelligence (AI)-generated deepfakes.
The helpline, expected to be available for public use next month, will allow users to flag deepfakes by alerting a dedicated WhatsApp chatbot. The chatbot will offer multilingual support — in English as well as Hindi, Tamil, and Telugu, said Meta, which has joined hands with the Misinformation Combat Alliance (MCA).
“We recognise the concerns around AI-generated misinformation and believe combating this requires concrete and cooperative measures across the industry,” said Shivnath Thukral, director, Public Policy India, Meta.
“Our collaboration with MCA to launch a WhatsApp helpline dedicated to debunking deepfakes that can materially deceive people is consistent with our pledge under the Tech Accord to Combat Deceptive Use of AI in 2024 Elections,” added Thukral.
Last week, a group of 20 leading tech companies across the globe — including Microsoft, Meta, Google, Amazon and IBM — signed an agreement to tackle AI-generated misinformation ahead of the 2024 elections.
In November last year, Union Minister of Electronics and Information Technology Ashwini Vaishnaw stressed watermarking and labelling of content as an approach to tackle deepfakes, after holding two rounds of discussions with intermediaries on the issue of deepfakes and misinformation.
The minister had said that though watermarking and labelling were basic requirements, many miscreants found a way to go around them.
The Indian government has also said that it would introduce stringent provisions to deal with deepfakes under the Information Technology (IT) Rules, 2021, through a fresh amendment.
As part of the collaboration with Meta in India, MCA will set up a central Deepfakes Analysis Unit to manage all inbound messages it receives on the WhatsApp helpline.
Further, it will work closely with other member fact-checking organisations as well as industry partners and digital labs to assess and verify the content and respond to messages accordingly, quashing false claims and misinformation, according to a press release.
“The Deepfakes Analysis Unit will serve as a critical and timely intervention to arrest the spread of AI-enabled disinformation among social media and internet users in India,” said Bharat Gupta, president, MCA.
“The initiative will see International Fact Checking Network signatory fact-checkers, journalists, civic tech professionals, research labs and forensic experts come together, with Meta’s support,” he added.
The programme will follow a four-pillar approach — detection, prevention, reporting and driving awareness — to stop the spread of deepfakes, said a press release.
Earlier this month, Meta also announced an “AI labelling policy” in which it plans to collaborate with other industry partners and develop “common technical standards” that will help in tagging AI-generated content.
“Since AI-generated content appears across the internet, we’ve been working with other companies in our industry to develop common standards for identifying it through forums like the Partnership on AI (PAI),” said the company in a blog post.
Meta, which owns popular social media platforms WhatsApp, Facebook and Instagram, had also introduced a fact-checking programme in India with 11 independent fact-checking partners in 2022.