When a spate of mob lynchings, driven by rumors spread on WhatsApp, killed more than 20 people in India in June and July 2018, the company was forced to act.
On the heels of a reprimand from India’s Ministry of Electronics and IT, WhatsApp made two major technical changes. Worldwide, it limited message forwarding from 256 to 20 groups or contacts at a time; in India, it reduced that limit to just five. The company also created a help page, available only in India, with the contact details of a WhatsApp grievance officer.
Of course, other factors helped lay the groundwork for these mob lynchings to occur — dysfunctional and corrupt law enforcement, weak rule of law and Hindu nationalist policies and rhetoric all played a role. Nevertheless, when it seemed to recognize its role as a catalyst in these crimes, Facebook responded.
When we see a strong correlation between disinformation on social media and threats to public safety or democracy, when are the stakes high enough for a company like Facebook to take action? What actions can it take that will actually prove effective?
The question is painfully relevant for many Brazilians this week, after far-right candidate Jair Bolsonaro was elected president on October 28.
A former army captain and congressman, Bolsonaro ran a divisive, emotionally-driven campaign propelled by fake news shared on social media — especially on WhatsApp. He has promised to end government corruption and street crime, while also openly expressing misogynistic and homophobic views and praising the use of torture during Brazil's military dictatorship.
Disinformation promoting Bolsonaro had been circulating on social media in Brazil for months before the election, and there’s ample evidence from fact-checking groups that these messages — while not all in favor of Bolsonaro — overwhelmingly supported him.
But the engine behind some of these disinformation campaigns only became publicly known when Folha de S. Paulo, Brazil's largest newspaper, found that a group of companies were secretly bankrolling the mass spread of slanderous messages about Bolsonaro’s rival Fernando Haddad via WhatsApp.
Just ten days before the final vote, Folha reported that business associates of the campaign had paid around US$ 3 million to the companies that ran the scheme. Alongside the legal implications for Bolsonaro’s campaign (the scheme appears to have violated campaign finance laws), the revelations raise serious questions about how WhatsApp’s technology and market dominance helped make it possible.
Bolsonaro’s election disinformation campaign
Over the course of election weekend, a coalition of fact-checkers flagged 20 false or misleading stories circulating on multiple platforms — 16 of them were partial to Bolsonaro's campaign promises or overall ideology.
From August to October, the well-regarded fact-checking group Agência Lupa studied the circulation of misleading images in 347 WhatsApp groups devoted to political discussion. The agency found that of the 50 most-forwarded images, only four were completely truthful.
Caption: A widely shared image falsely depicting Brazil's former president Dilma Rousseff alongside Fidel Castro. The original photo was taken in April 1959, during Castro's visit to New York, when Rousseff was 11 years old.
The most popular image among these groups was a black-and-white photo of Fidel Castro from 1959, doctored so that the face of a woman at his side was replaced with that of Brazil's ex-president Dilma Rousseff. The image does not mention Bolsonaro explicitly, but it backs his public claims that the Workers' Party, and Haddad specifically, are associated with communism.
Could Facebook have done more?
When Brazilian fact-checkers approached the company in early October, proposing changes similar to those that the company made in India, WhatsApp replied that “there wasn't enough time” to make such changes. But after the Folha revelations, WhatsApp banned over 100,000 accounts in Brazil for spam.
After the account removals, WhatsApp CEO Chris Daniels wrote an op-ed for Folha listing the measures WhatsApp had taken to combat disinformation on its platform. But the technical tweaks he mentioned had been implemented worldwide in July; none of them were specific to Brazil.
As far as we know, India is the only country where the company has taken action at the technical level, by actually altering the message-sending capacity of the service. This week, many Brazilians may look to India and wonder: why couldn’t Facebook do the same for us?
On WhatsApp, end-to-end encryption makes it virtually impossible to moderate content. The only way to change the dynamics of information flow on the platform is to alter its technical capabilities, as the company did by limiting the number of chats that can receive a forwarded message (a rough sketch of how such a client-side limit might work appears below). Even then, it is incredibly difficult to know which interventions can quell the spread of disinformation at this scale. Have the measures in India proven effective? Given the way WhatsApp is built, and the presence of external factors, this is almost impossible to measure.
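To make that point concrete, here is a minimal, hypothetical sketch of the kind of client-side forwarding cap described above, written in Python rather than anything resembling WhatsApp's actual code. Because end-to-end encryption means the server never sees message content, enforcement has to happen on the sender's device, before encryption. The ForwardLimiter class and the constant names are illustrative assumptions; only the limits themselves (20 chats worldwide, five in India) come from the reported changes.

```python
# Hypothetical sketch: a client-side cap on how many chats one forward action can reach.
# With end-to-end encryption, the server cannot read messages, so the only available
# lever is limiting behavior on the sender's device.

FORWARD_LIMIT_GLOBAL = 20  # limit announced worldwide in July 2018
FORWARD_LIMIT_INDIA = 5    # stricter limit applied in India

class ForwardLimiter:
    def __init__(self, region: str):
        # The cap is chosen per region and enforced before the message is encrypted.
        self.limit = FORWARD_LIMIT_INDIA if region == "IN" else FORWARD_LIMIT_GLOBAL

    def forward(self, message: str, recipients: list) -> list:
        # Refuse a single forward action that targets more chats than the cap allows.
        if len(recipients) > self.limit:
            raise ValueError(
                f"Cannot forward to {len(recipients)} chats; the limit is {self.limit}"
            )
        # Each copy would be encrypted separately for its recipient and sent.
        return [f"encrypted({message}) -> {chat}" for chat in recipients]

# Example: a user in India trying to push one message to 40 groups is blocked,
# and would have to repeat the action many times to reach them all.
limiter = ForwardLimiter(region="IN")
try:
    limiter.forward("viral rumor", [f"group_{i}" for i in range(40)])
except ValueError as err:
    print(err)
```

Even a cap like this does not detect or remove disinformation; it only adds friction to mass forwarding, which is part of why its real-world effect is so hard to measure.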
What is clear is that, even with the changes the company made worldwide this past summer, schemes like the one that benefited Bolsonaro can still be extremely effective on WhatsApp.
What about regulation?
Could Brazilian electoral officials have pushed Facebook to take further action? Maybe, but in contrast to India, a country that has shown no fear when it comes to blocking internet services, Brazil's Bill of Rights for the Internet, known locally as the “Marco Civil”, shields companies like Facebook from liability for the effects of content posted on their platforms, much like Section 230 of the Communications Decency Act in the United States.
Brazil’s judiciary has not been shy about punishing companies when they do cross the lines of liability. In this case, though, while the Electoral Court has acknowledged the profusion of election-related disinformation flooding social media in Brazil since December 2017, it failed to act. And even if it had made recommendations, or found a way to force Facebook to make greater changes, it is not clear that these would have helped; they may even have hurt in other ways.
What has happened in Brazil suggests that Facebook may still not grasp the scale of its own power when it comes to disinformation. With or without regulation or technical changes, the limits of what WhatsApp makes possible remain hard to discern.
What we do know is that the 45 million people who voted for Fernando Haddad, Bolsonaro's defeated opponent, have watched a candidate who openly calls for his political opponents to be shot carried to victory, helped by disinformation spread primarily on multi-billion-dollar private platforms.
Bolsonaro himself continues to openly vilify the traditional press. And with his inauguration fast approaching, he continues encouraging his supporters to connect with him on WhatsApp.