
Surveillance fears over new rules to tame social media in India

Activists and governments have long demanded greater transparency and accountability from Facebook, WhatsApp, Twitter and other social media behemoths

Geetanjali Krishna New Delhi
5 min read Last Updated : Mar 10 2021 | 6:10 AM IST
  • In 2020, three persons were killed in Maharashtra by a mob after rumours about thieves and child kidnappers spread on social media
  • A 2019 analysis by Amnesty International India of the Twitter feeds of 95 women politicians showed they collectively received over 10,000 problematic or abusive tweets a day

Fake news and hate speech, much of it amplified by social media, have come to dominate global discourse in recent times. Activists and governments have long demanded greater transparency and accountability from Facebook, WhatsApp, Twitter and other social media behemoths.

Yet the newly notified Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, though touted as an answer to this scourge, have been met with frowns.

Business Standard examines their implications for digital privacy and freedom in the context of US think tank Freedom House’s report, Democracy under Siege, which last week ranked India as “partially free” in its internet freedom ranking.

Big brother vs trolls

The Intermediary Rules mandate that all “significant social media intermediaries” (platforms with more than five million registered users in India) enable traceability of end-to-end encrypted messages, so that the first originator of hate speech, fake news and the like can be identified if required by a court or by a competent authority under Section 69 of the IT Act.

“While the rules clarify that the traceability order may only be passed for serious offences, some categories are open-ended and vague,” says Internet Freedom Foundation’s (IFF) Apar Gupta. “Used in tandem with Information Technology Decryption Rules (which empower the government to demand message content on social media), the government will break any type of end-to-end encryption to gain knowledge of who sent what message and also get to know its contents.” This has troubling implications in a context where activists have been targeted for their social media posts.

For instance, in September 2019, activist Shehla Rashid was charged with sedition for her tweets on human rights violations by the Indian Army in Kashmir. In February 2020, poet Siraj Bisaralli was charged under Section 505 of the IPC, and later released on bail, after a recording of his recitation of a poem critical of the Citizenship (Amendment) Act and the National Register of Citizens was shared on social media. Months later, the chair of the Delhi Minorities Commission, Zafarul Islam Khan, was charged with sedition and with promoting communal enmity through social media posts.

Where’s the law to protect user data?

Perhaps to identify trolls and bots, the regulations ask significant social media intermediaries to allow, and even incentivise, users to “voluntarily” verify their accounts using government IDs. Given that the use and transfer of citizens’ personal data is governed by the now antiquated IT Act, 2000, while the Personal Data Protection Bill, 2019, remains pending in Parliament, private entities will end up collecting government ID data with no regulatory authority to ensure it is used only for verification.

The new mandate that social media intermediaries retain user data for 180 days for investigative purposes adds to these concerns. “Presently, most of these platforms retain minimal user data and use E2E encryption to provide privacy to users,” Gupta says. “It is problematic in the absence of a data protection law and any kind of oversight on how surveillance operates in India.” IFF has a case pending in the Supreme Court on the need to preserve the privacy of metadata even when the actual content of messages is encrypted.

AI censors aren’t smart enough

Privacy concerns have prompted the Intermediary Rules to suggest automated censorship of social media content using artificial intelligence (AI). However, examples of AI censors missing the mark abound. In 2020, Croatian chess player Antonio Radić’s YouTube channel was blocked. Scientists at Carnegie Mellon suspect that his discussion of “black vs white”, as well as his use of words like “attack” and “threat”, accidentally triggered YouTube’s AI filters. They went on to analyse 680,000 comments on five popular chess-themed YouTube channels; a review of a random sample of the comments identified as hate speech revealed that 82 per cent had been incorrectly flagged.
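A minimal sketch shows why such context-blind filtering misfires. The watchlist and flagging rule below are illustrative assumptions, not YouTube’s actual classifier: a keyword match alone cannot tell commentary about chess pieces from genuinely abusive speech.

    # Toy, purely illustrative keyword filter (an assumption for illustration,
    # not YouTube's actual system): flags any comment containing a watchlisted term.
    FLAGGED_TERMS = {"attack", "attacks", "threat", "threatens", "kill", "black", "white"}

    def naive_flag(comment: str) -> bool:
        """Return True if the comment contains any watchlisted term."""
        words = {w.strip(".,!?").lower() for w in comment.split()}
        return bool(words & FLAGGED_TERMS)

    chess_talk = "White threatens mate, so Black must attack before the threat lands."
    print(naive_flag(chess_talk))  # True: an innocuous chess comment gets flagged

Without the surrounding context of a game, such a filter treats ordinary chess vocabulary as hate speech, which is the kind of false positive the Carnegie Mellon researchers measured.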

Need for digital literacy and accountability

Internet freedom and human rights advocates stress that the best way to counter fake news, trolling and pornography on social media is to insist on transparent self-regulation and moderation by the platforms themselves, not increased state surveillance.

“Big tech should become more accountable to its users,” says Avinash Kumar, former head of Amnesty India. “They already have their own mechanisms of moderation and these need to be amped up and brought into the public domain.”

IFF believes that the enforcement of such wide-ranging rules on social media ethics merits proper parliamentary debate. This would provide an opportunity for reservations to be aired. “There’s definitely good reason for fresh areas of regulation of social media,” says Gupta. “But it shouldn’t be at the cost of our right to privacy.”


Topics: Social Media | Surveillance | Digital Media | Online News | Digital News | Twitter | Facebook