Facebook believes its policing system is better at scrubbing graphic violence, gratuitous nudity and terrorist propaganda from its social network than it is at removing racist, sexist and other hateful remarks.
Tuesday's self-assessment, Facebook's first breakdown of how much material it removes, came three weeks after the company tried to give a clearer explanation of the kinds of posts it won't tolerate. The statistics cover a relatively short period, from October 2017 through March of this year, and don't disclose how long, on average, it takes Facebook to remove material that violates its standards.
The increased transparency comes as the Menlo Park, California, company tries to make amends for a privacy scandal triggered by loose policies that allowed a data-mining company with ties to President Donald Trump's 2016 campaign to harvest personal information on as many as 87 million users. The content screening has nothing to do with privacy protection, though, and is aimed at maintaining a family-friendly atmosphere for users and advertisers.
Facebook removed 2.5 million pieces of content tagged as unacceptable hate speech during the first three months of this year. It says 62 percent of the offending content was flagged by Facebook users, while the company's human reviewers and computer algorithms identified 38 percent. By contrast, Facebook's automated tools detected 86 percent to 99.5 percent of the violations in the categories of graphic violence, nudity, sexual activity and terrorist propaganda.
Facebook traced the disparity to the difficulty computer programs have in understanding the nuances of human language, including the context and tone of the sentences being written.