
A technology that could stop Facebook Live being used to stream murders


Honglei Li | The Conversation
Last Updated : May 19 2017 | 11:12 AM IST

It took 24 hours before the video of a man murdering his baby daughter was removed from Facebook. On April 24, 2017, the father, in Thailand, had streamed the killing of his 11-month-old baby girl using the social network’s Live video service before killing himself. The two resulting video clips were viewed hundreds of thousands of times before they were finally removed.

This was not the first time Facebook had been used to live stream violent behaviour. Earlier in April, the site was used to stream a murder in Cleveland and a suicide in Alabama in the US.

As a result, Facebook has been criticised for not responding quickly enough when its live streaming service is used in this way. The company says it already plans to hire 3,000 people to identify videos containing criminal or violent behaviour.

But with 1.86 billion users, Facebook is far too big for this to be enough. What Facebook is facing is not only a management problem but a technology challenge. The social network also needs to roll out software that can detect videos with violent content automatically.

Traditionally, social networks have relied on users to identify criminal activity through reporting and complaints systems. If anyone feels threatened or notices abnormal activity, they can report it to the site or, if necessary, directly to the police. In Facebook’s case, if anyone complains about violent content, Facebook will investigate it and decide whether it needs to be removed.

But given the amount of content posted every day and the speed at which it spreads, even thousands of investigators are unlikely to be enough to deal with violent videos rapidly. That’s why it took nearly 24 hours for the murder video to be removed, even though it was reported right after the live stream started.

Recent developments in artificial intelligence could provide a solution through what are known as “text mining”, “image mining” and “video mining”. These techniques use machine learning algorithms to automatically detect sensitive words or behaviour in digital content. Facebook could set up a system that uses this technology to identify content as potentially violent and prevent it from spreading through the network. This would provide more time for users to report the content and for Facebook’s staff to check whether it needs to be removed.
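
To make the idea concrete, here is a minimal sketch in Python of the text-mining step, using the scikit-learn library. A classifier is trained on a handful of hypothetical labelled posts and returns the probability that new text is violent. The example posts, model choice and threshold are assumptions for illustration, not a description of anything Facebook actually runs.

```python
# A minimal text-mining sketch: learn word patterns that signal violent
# content and score new posts. All data, names and the threshold below
# are illustrative, not part of any real moderation system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled training posts (1 = violent, 0 = benign).
posts = [
    "I am going to hurt him tonight",
    "watch me attack him on my live stream",
    "having a lovely day at the park",
    "streaming my cooking show later",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each post into weighted word features; logistic
# regression learns which features are associated with violence.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

def violence_score(text: str) -> float:
    """Return the model's probability (0-1) that a post is violent."""
    return float(model.predict_proba([text])[0][1])

# Posts scoring above a review threshold would be held back and
# queued for human moderators rather than published immediately.
REVIEW_THRESHOLD = 0.5  # illustrative cut-off
score = violence_score("I will hurt someone on my stream tonight")
print(score, score >= REVIEW_THRESHOLD)
```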

To be effective, the algorithms need to incorporate ideas from psychology and linguistics so that they can categorise different types of violent content. For example, the act of killing someone is relatively easy to designate as violent. But many other potentially violent acts involve psychological damage rather than bodily harm.

The algorithms would have to automatically cluster or classify messages into different levels based on their linguistic features, attaching a higher score to content with a greater likelihood of violent behaviour. Facebook staff could then use this system to more efficiently monitor content.
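
Continuing the sketch, such scores could feed a triage workflow along these lines: each score is bucketed into a severity level, and flagged posts go into a priority queue so the riskiest content reaches moderators first. The levels, cut-offs and queue design here are illustrative assumptions.

```python
# Illustrative triage: bucket each violence score into a severity level
# and queue flagged posts so the highest-risk content is reviewed first.
# The levels and cut-offs are assumptions, not real policy values.
import heapq

SEVERITY_LEVELS = [
    (0.9, "critical"),  # likely physical violence: hold and review now
    (0.6, "high"),      # violent speech: hold pending review
    (0.3, "medium"),    # flag for routine moderator check
]

def severity(score: float) -> str:
    """Map a 0-1 violence score to a severity label."""
    for cutoff, label in SEVERITY_LEVELS:
        if score >= cutoff:
            return label
    return "low"  # no action needed

review_queue = []  # max-heap via negated scores

def enqueue_for_review(post_id: str, score: float) -> None:
    """Queue a flagged post; heapq pops the highest score first."""
    heapq.heappush(review_queue, (-score, post_id))

enqueue_for_review("post-123", 0.95)
enqueue_for_review("post-456", 0.40)
neg_score, post_id = heapq.heappop(review_queue)
print(post_id, severity(-neg_score))  # post-123 critical
```

A priority queue rather than a first-come-first-served list would mean that a burst of routine flags could not bury a live stream the model rates as critical.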

This may also allow staff to prevent violent content from appearing before it is uploaded. If the system alerted staff to low-level violent speech or messages, they could step in before more severe content, such as actual physical violence, was uploaded.

If details were then passed to the police, such a system might even help prevent crimes from occurring in the first place. For example, a government report on the public murder of British soldier Lee Rigby suggested that Facebook could have done more to stop the killers, who had discussed “killing a soldier” on the site.

New problems

This kind of machine learning algorithm is already well developed and is used to detect car accidents and congestion in the CCTV footage monitored by transport authorities. But it has yet to be developed for live-streamed online video. The difficulty is that live-streamed content is much harder for algorithms to analyse than footage of moving cars. But the urgent demand for content monitoring and management software should drive advances in this area. Facebook might even become a leader in the field.
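
As a rough sketch of what the video side might involve, the following Python snippet uses the OpenCV library to sample roughly one frame per second from a stream and passes each sampled frame to a scoring function. The stream URL is hypothetical, and classify_frame is a stub standing in for a trained image model.

```python
# Sketch of video mining on a live stream: sample about one frame per
# second and score each sampled frame. The stream URL is hypothetical
# and classify_frame is a stub for a trained image classifier.
import cv2  # OpenCV

def classify_frame(frame) -> float:
    """Stub scorer: a trained image model would go here.
    Returns 0.0 so the loop runs end to end."""
    return 0.0

STREAM_URL = "rtmp://example.com/live/stream"  # placeholder stream
ALERT_THRESHOLD = 0.8                          # illustrative cut-off

capture = cv2.VideoCapture(STREAM_URL)
fps = capture.get(cv2.CAP_PROP_FPS) or 30.0    # fall back if unknown
frame_index = 0

while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break  # stream ended or dropped
    # Scoring every frame is too costly; sample once per second.
    if frame_index % int(fps) == 0:
        score = classify_frame(frame)
        if score >= ALERT_THRESHOLD:
            print(f"frame {frame_index}: score {score:.2f} - alert staff")
    frame_index += 1

capture.release()
```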

However, this might lead to content being monitored and even censored before it has been published. This would raise the issue of what rights Facebook has over content posted to its site, adding to existing controversy over the way most social networks have the right to use content in almost any way they like.

It would also conflict with the conventional ethos of social media being a way for users to publish anything they wish (even if it may later be removed), which has been a part of the internet since its birth. It would also mean Facebook accepting greater responsibility for the content on its site than it has so far been prepared to acknowledge, making it more like a traditional publisher than a platform. And this could create a whole new set of problems.

Honglei Li, Senior Lecturer in Computer and Information Sciences, Northumbria University, Newcastle

This article was originally published on The Conversation. Read the original article.