Spreading fake news on social media is among the easiest of digital tasks: most propagators simply forward it on WhatsApp, “like” it on Facebook, or retweet it on Twitter. Combating it, flagging it, and preventing it from going viral is far harder, and the many methods tried so far have met with very limited success. Fake news has been used for all sorts of malign purposes, from influencing elections through lies to slandering individuals and companies to sparking lynch mobs. WhatsApp, the popular instant messenger (a Facebook subsidiary), is now making a determined effort to combat the spread of fake news on its network. The imperatives are clear: the service has been implicated in the spate of lynchings that have plagued India, and it has received warnings from the government. There is also a hard deadline, since the system must be up and running before the General Elections of 2019. To be sure, WhatsApp already has a model to combat fake news, one it put to use in the recent elections in Mexico.
Still, the technical contours of the problem, and its scale in India, are daunting. WhatsApp is installed on about 450 million Indian handsets, and over 200 million people use it daily. Over 90 per cent of messages pass from one individual to another, and the average WhatsApp group contains six to eight persons. Many messages are in regional languages, often written in the Roman script, which confounds machine-based analysis. Moreover, end-to-end encryption is an article of faith for WhatsApp: apart from the sender and recipients, nobody, not even the service provider, can read a message's content unless a sender or recipient chooses to share it. Given a closed communication system of this nature, purely technical solutions are impossible. Behavioural change is necessary, both to stop users from forwarding fake news virally and to persuade them to submit dubious content to the service provider for verification.
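To see why such a closed, end-to-end encrypted system leaves the operator blind to content, consider a minimal sketch using simple symmetric encryption from Python's cryptography package. WhatsApp's real design uses the far more elaborate Signal protocol, so this is purely illustrative, and all names here are hypothetical:

```python
# Minimal sketch: a relaying server cannot read end-to-end encrypted
# messages. Uses simple symmetric encryption for illustration only;
# WhatsApp actually uses the Signal protocol.
from cryptography.fernet import Fernet

# The key exists only on the sender's and recipient's handsets;
# the relaying server never sees it.
shared_key = Fernet.generate_key()
cipher = Fernet(shared_key)

# The sender encrypts before the message leaves the handset.
ciphertext = cipher.encrypt("Did you see this news story?".encode())

# The server relays only this opaque ciphertext...
print(ciphertext)  # unreadable bytes without the key

# ...and only the recipient, holding the key, can recover the text.
print(cipher.decrypt(ciphertext).decode())
```

The point of the sketch is the asymmetry it makes visible: verification schemes cannot inspect traffic in transit, so they must rely on users voluntarily handing content over at one of the endpoints.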
Tackling this, therefore, requires a multi-pronged effort. WhatsApp has started offering grants, of the order of $50,000, to social scientists who can research the issues and find ways to induce such behavioural changes in users. On the technical side, it has already instituted a system in which forwards are clearly marked as such. It has also put a ceiling on the number of times a given message can be forwarded: five in India, versus 20 for the rest of the world. Of course, each recipient can in turn forward a message five times, and copying and pasting the text appears to circumvent the restriction altogether. Even so, these limits could slow a message's viral spread, as the sketch below illustrates.
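Here is a minimal sketch of how such a per-user, per-message forward cap might work, assuming the regional limits reported above (five in India, 20 elsewhere); the class and method names are hypothetical, not WhatsApp's actual code:

```python
# Hypothetical sketch of a per-user forward cap. Each user's client
# tracks how many times it has forwarded each message and refuses
# forwards beyond a regional limit.
class ForwardLimiter:
    def __init__(self, region: str):
        # Assumed limits: five forwards in India, 20 elsewhere.
        self.limit = 5 if region == "IN" else 20
        self.counts: dict[str, int] = {}  # message_id -> forwards so far

    def try_forward(self, message_id: str) -> bool:
        """Allow a forward only while this user is under the cap."""
        used = self.counts.get(message_id, 0)
        if used >= self.limit:
            return False
        self.counts[message_id] = used + 1
        return True

limiter = ForwardLimiter(region="IN")
results = [limiter.try_forward("msg-42") for _ in range(7)]
print(results)  # [True, True, True, True, True, False, False]
```

Note that a copied-and-pasted message would arrive with a fresh identifier, so its counter starts from zero, which is exactly the loophole the article mentions; the cap slows, rather than stops, viral spread.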
Before the Mexico elections, WhatsApp created a “verificado” system. It worked with independent local fact-checkers and set up phone numbers to which users could forward dubious content. The fact-checkers would verify the content and stamp it as either genuine or fake. Apparently, this proved useful in Mexico. In India, WhatsApp is now working to build similar alliances with independent fact-checkers and computer scientists. The idea is that forwarded content will be verified for truth and colour-coded with a traffic-light system: green for “true”, amber for “be careful” and red for “fake”. In addition, WhatsApp may temporarily tighten the forward limit further during the critical 48 hours before polling starts anywhere.
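A minimal sketch of that traffic-light labelling might look as follows, assuming the fact-checkers return one of three verdicts; the verdict names and the function are illustrative assumptions, not the actual verificado interface:

```python
# Hypothetical sketch of the traffic-light labelling described above:
# a fact-checker's verdict on forwarded content is mapped to a colour
# shown back to the user who submitted it.
from enum import Enum

class Label(Enum):
    GREEN = "true"        # verified as genuine
    AMBER = "be careful"  # could not be confirmed either way
    RED = "fake"          # verified as false

def label_content(verdict: str) -> Label:
    """Map a fact-checker verdict (assumed vocabulary) to a colour."""
    mapping = {
        "genuine": Label.GREEN,
        "unverified": Label.AMBER,
        "fake": Label.RED,
    }
    return mapping[verdict]

print(label_content("unverified"))  # Label.AMBER
```

The design choice worth noting is the amber middle state: a binary true/fake stamp would force fact-checkers to rule on content they cannot confirm, whereas "be careful" lets the system respond quickly without overclaiming.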
It remains to be seen how effective this verification model will prove in India, given the bewilderingly complex mix of languages and of local, regional and national issues. It will ultimately stand or fall on the service provider's ability to actually induce behavioural change in users. Sadly, every political formation has a vested interest in keeping the fake news channels selectively open. So, despite pious protestations on all sides, there may be a lack of political will to support this initiative.