The decision by Meta (which owns Facebook and Instagram) to end its fact-check programme and replace it with “community notes” similar to those on X (formerly Twitter) was triggered by the imminent change in America’s political order. Meta faces an antitrust investigation, and President-elect Donald Trump claimed he was “very probably” responsible for Meta initiating the change. Mr Trump has, on multiple occasions, expressed anger at fact checkers. Meta Chief Executive Officer (CEO) Mark Zuckerberg also said the platform would reverse its 2021 policy of reducing political content, which means more content on hot-button subjects like immigration, gender, and religion will be surfaced. The change will start in the United States (US); Meta may find it harder to make the switch in places like the European Union, which has more stringent regulations on hate speech and misinformation than the US. The decision alters how misinformation will be treated on two of the largest platforms. It also affects the finances of the 80-odd fact-checking organisations that work with Meta globally.
Fact checking was initiated after the 2016 US elections and the Brexit referendum, both of which were influenced by rampant disinformation on Facebook. (The 2021 decision to reduce political content was a response to user feedback.) Meta asked third-party fact checkers to verify content; posts rated “false” are downgraded in news feeds, and anyone who tries to share a false post is shown a note explaining why it is misleading. Twitter used a similar system until it was bought by Elon Musk, who replaced it with community notes, which allow users to collaboratively add context to misleading posts on X, relying on reader consensus rather than moderation.
While third-party fact checking was by no means perfect, the X experience suggests it was better than community notes. That move, supposedly made to enable free speech, has led to an explosion of hate speech, abuse and harassment, and violent content. The first transparency report released by X after Mr Musk took over says 5.3 million accounts were banned for abusive behaviour between January and June 2024, over three times the 1.6 million accounts banned in the same period of 2022, before Twitter changed hands (October 2022). X also shares ad revenue with “premium” posters. The combination of community notes and revenue sharing is a recipe for disaster: controversial posts receive higher engagement, and posters who generate controversy earn more revenue. The community notes system may also lead to content from and about public figures being mislabelled through concerted action by their opponents, which contradicts basic principles of free speech.
Misinformation on social media about the pandemic contributed to the crisis, with many individuals seeking out quack medication and avoiding vaccination. Similarly, climate-change deniers get a louder megaphone in the absence of fact checks. Mr Zuckerberg admits Meta will “catch less bad stuff” after removing fact checkers. He hopes this will enable more free speech on topics in mainstream discourse, reduce censorship, and prevent “fake positives” that lead to bans on innocent posters. However, conflating fact checks with censorship of free speech is usually done in bad faith by those who stand to gain; the two are not the same. Unfortunately, Facebook, Instagram, and X dominate the social-media landscape, and with Meta falling in line with Mr Trump’s wishes, the change may lead to an amplification of misinformation and hate speech across all three platforms.