Facebook, Twitter, Microsoft and Google-owned YouTube announced yesterday a drive to stop the proliferation of videos and messages showing beheadings, executions and other gruesome content, posted by the likes of the Islamic State group or Al-Qaeda.
The move comes as social media giants face increasing scrutiny over their role in the explosion of so-called "fake news" - which is believed to have influenced the US election - as well as online bullying and hate speech.
But with the rampant use of the networks by jihadists to plan, recruit and depict violent attacks, the tech platforms were forced to take a stronger stand.
"There is no place for content that promotes terrorism on our hosted consumer services," they said in a joint statement.
James Lewis, a senior fellow who follows technology and security issues at the Center for Strategic and International Studies, believes social media have reached a turning point, and can no longer claim to be "neutral platforms."
"Terrorist content is only the start," he said. "Now they have to figure out what to do about hate speech, racism and bullying."
Yesterday's joint statement did not indicate what type of technology would be used in the new initiative, except to say it would be based on a shared industry database of "hashes" or digital fingerprints that identify jihadist content.
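The statement offers no technical detail, but the basic idea of a shared hash database can be sketched simply: each platform computes a fingerprint of flagged content and contributes it to a common set that the others can check uploads against. The sketch below is purely illustrative, using an exact SHA-256 hash; the function and database names are hypothetical, not part of any announced system.

```python
import hashlib

# Hypothetical shared industry database: a set of fingerprints of flagged content.
shared_hash_db = set()

def fingerprint(data: bytes) -> str:
    """Compute a digital fingerprint (here, SHA-256) of a piece of content."""
    return hashlib.sha256(data).hexdigest()

def flag_content(data: bytes) -> None:
    """One platform flags content; its hash joins the shared database."""
    shared_hash_db.add(fingerprint(data))

def is_known(data: bytes) -> bool:
    """Another platform checks an upload against the shared database."""
    return fingerprint(data) in shared_hash_db

# Platform A flags a video; platform B can then recognize the identical file.
flag_content(b"example video bytes")
print(is_known(b"example video bytes"))  # True
print(is_known(b"different content"))    # False
```

In practice, exact cryptographic hashes only match byte-identical files; systems such as Microsoft's PhotoDNA, used against child-abuse imagery, rely instead on perceptual hashes that survive re-encoding, resizing and minor edits.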
Some critics have suggested that such content could be curbed using a template: a program of the kind already employed by online firms to block child pornography. Others, however, warned against the approach. "Getting companies to do any centralized censorship could lead to a lot of negative consequences," Calabrese said.
Calabrese said that "there is no guarantee the program will work" in curbing the spread of violence and extremist content.
He said that to ensure the program is not abused, "companies should not take any censorship requests from governments," and there should be an appeal mechanism "for correcting any mistakes."