The news about the Wuhan coronavirus is bad and getting worse. In its potential for devastation, the current virus rivals its 2003 cousin, the SARS coronavirus. But China is far more central to the global economy now than it was in 2003, giving new meaning to the old adage that when China sneezes, the world catches a cold. And this time the world is far more susceptible to catching that cold, in part because of the rise of social media.
Seventeen years ago, during the SARS scare, the world wasn't hooked on social media. Today, we can expect digital viruses (the retweets, likeable posts and shareable memes) to rideshare with the coronavirus, and viral misinformation could worsen the global public health emergency. No doubt, social media can serve as a powerful tool for public health messaging, educating the public and debunking myths. Unfortunately, the myth-makers tend to beat the educators and debunkers: according to a recent MIT study, false news is 70 per cent more likely to be retweeted than true stories, and the truth travels about six times slower than falsehood.
Yet during this outbreak, the social media industry has performed better than it has been given credit for. Still, it can do more.
Rumours have gone into hyperdrive across platforms, stoking waves of Sinophobia and racism by falsely blaming the outbreak on a supposed Chinese habit of eating bats. The short-video app TikTok has been particularly active, with numerous posts spreading misinformation. One misleading video was viewed 2.4 million times before it was removed, yet video duets (reactions to the original) lingered on, showing how difficult it is to kill digital falsehoods. Other posts baselessly claimed that the virus was created by the government for population control. The conspiracy group QAnon falsely claimed in a video that the virus' creation was backed by Bill Gates.
Alarmist statistics have also been spreading, along with false remedies, prophylactics and cures: one tweet with over 140,000 "likes" predicted 65 million deaths, a claim that has since been debunked. And virality is all but assured when misinformation jumps from one platform to another.
Containment gets even harder when power users, such as politicians, give viral misinformation a boost. In the US, President Donald Trump has helped amplify tweets from supporters of QAnon, which has been active in spreading coronavirus rumours.
Meanwhile, in China, doctors and frontline workers have been censored by the authorities, and some firsthand reports have reportedly been taken down from WeChat. Chinese state media even circulated a fake image of a building it claimed was a hospital built in 16 hours.
Despite my healthy respect for the scale of the disinformation problem, the lamest responses have come from Twitter and Google. Twitter merely prompts users searching for the coronavirus to first visit authoritative sources, such as the CDC. A corresponding search on Google-owned YouTube reportedly links to a New York Times article. Neither seems ready to simply take down patently false content.
Fortunately, other platforms have been more proactive. TikTok has removed some coronavirus misinformation and WeChat claims to have done the same. (They may find this easier because they have a history of censorship.)
The biggest and most pleasant surprise is Facebook. Its past strategy favoured labelling misleading content rather than removing it. This time, it is limiting the distribution of posts rated false by third-party fact-checkers and using the News Feed to steer users to authoritative sources. It is giving free advertising credits to organisations running coronavirus education campaigns and has added a resource page for spotting falsehoods. The most significant change, however, is Facebook's announcement that it will remove content that spreads coronavirus misinformation and block or restrict related hashtags on Instagram.
As Facebook's Head of Health put it: "We are doing this as an extension of our existing policies to remove content that could cause physical harm."
This elevated "physical harm" standard is one that all other social media platforms, including Twitter and Google/YouTube, ought to adopt. Of course, doing so requires reliable fact-checking partnerships, but with 195 fact-checking organisations to draw on, that is hardly impossible. YouTube has launched a limited fact-checking initiative and urgently needs to expand it; Twitter needs to launch one.
Social media's response to this virus could not only slow the spread of viral falsehoods, but also slow the rate at which the public is losing trust in the industry.