Google, Facebook, Twitter face terror law in EU crackdown on internet hate

Companies could be whacked with fines if they fail to comply

Bloomberg
Last Updated: Jul 29, 2018 | 2:39 PM IST
Google, Twitter Inc. and Facebook Inc. have taken significant steps to expunge Islamic State propaganda and other terrorist content from their platforms.

But taking no chances, the European Union is set to propose a tough new law anyway -- threatening internet platforms, big and small, with fines if they fail to take down terrorist material, according to people familiar with the proposals that could be unveiled as soon as September.

While the details of the measures are still being thrashed out, they would likely be based on the EU guidance from earlier this year, said the people, who asked not to be identified because the details aren’t yet public.

The EU in March issued guidelines giving internet companies an hour from notification by authorities to wipe material such as gruesome beheading videos and other terror content from their services, or face possible legislation if they fail to do so.

"It’s true that the positive role that some of the big companies are playing today is incomparable to the situation three years ago,” said Gilles de Kerchove, the EU’s anti-terrorism czar. “But so is the scale, breadth and complexity of the problem." An additional step in the response is "essential," he said, given the diverse online aspects of the recent attacks in Europe.

Big Strides
 
Large tech firms say they’ve been making big strides in the fight to wipe terror propaganda, videos and other messages from their sites, partly thanks to automated tools that in some cases can detect such content before users even see it.

"We haven’t had any major incidents to rush legislation," said Siada El Ramly, head of Edima, a European trade association representing online platforms including Google, Facebook and Twitter.

Online services take the fight against terrorist content extremely seriously, said Maud Sacquet, senior manager for public policy at the Computer & Communications Industry Association, an industry group that includes Google and Facebook as members.

“This proposal seems rushed and its publication in the fall much too early to take into account the outcomes of already ongoing EU initiatives,” she said.

A commission spokeswoman declined to provide more details on the proposals.

Violent Extremism
 
In April, Google said more than half of the YouTube videos it removes for violent extremism have fewer than 10 views. Facebook said the same month that in the first quarter of this year it either removed, or in a small number of cases flagged for informational purposes, a total of 1.9 million pieces of Islamic State and al-Qaeda content. Twitter says it has suspended more than one million accounts, with 74 percent of them suspended before their first tweet.

Some EU member states have been vocal about the dangers of online radicalization and the spread of terror propaganda, particularly in the wake of deadly terror attacks in several European capitals in recent years. In a speech in April, French President Emmanuel Macron called on internet giants to speed up their removal of terror content.

Germany didn’t wait around and last year pushed ahead with new rules that threaten social networks with fines of as much as 50 million euros ($58 million) if they fail to give users the option to complain about hate speech and fake news or refuse to remove illegal content.

Gaming the Systems

For companies, detecting harmful content is a constant battle, as some groups keep trying to game their systems to spread their messages online as widely as possible. One tool that has helped is a shared industry database of known terrorist videos and images, maintained by Google, Twitter, Facebook and other companies, which lets each firm see what the others have taken down and remove the same content from its own platform.

Europol has said cooperation with the big internet platforms on taking down the terror content it flags is “excellent.” The agency works with more than 70 internet and media companies, which on average remove more than 90 percent of flagged content within two to three hours.

While big platforms have been able to speed up their removals, any legislation could hit smaller companies with fewer tools disproportionately hard. Yet excluding them from the law’s scope could make those smaller platforms more attractive places for terrorist groups and their followers to move their communications.

No Clarity
 
For Edima, the concern is that the threat of fines could push companies to err on the side of over-removal if the rules don’t clearly spell out, for instance, when the time frames for removal begin or which groups are considered terror organizations.

"We’re concerned that if we don’t have clarity” in the new rules “that platforms could be forced to become the judge and jury as to how to classify that content," El Ramly said.

Still, some critics say the big internet giants need to do more. The non-profit organization Counter Extremism Project, which aims to combat the threat of extremist ideologies, said in April that gaps remained in the approaches of Facebook and other companies to combating extremism.

The group said Facebook has only emphasized the removal of Islamic State and al-Qaeda content and has provided insufficient transparency about its progress in removing content from other extremist groups. Facebook didn’t respond to requests for comment. Google and Twitter didn’t comment on the EU’s legislation.