Technology firms including Facebook, Instagram and Twitter face “substantial” fines or a UK ban under a new law if they fail to act swiftly enough to remove content that encourages terrorism and child sexual exploitation and abuse.
The companies’ directors could also be held personally liable if illegal content is not taken down within a short, pre-determined time frame, the Home Office said. The exact level of fines will be examined during a 12-week consultation following the legislation’s launch on Monday. The spread of fake news and interference in elections will also be tackled.
The need for a new law, rather than a voluntary code, was highlighted by the terrorist attack in New Zealand last month, in which 50 Muslims were killed and footage of the killings was live-streamed online. In the UK, the case of 14-year-old Molly Russell has also focused minds. According to her father, the teenager killed herself in 2017 after viewing self-harm and suicide content online.
“Put simply, the tech companies have not done enough to protect their users and stop this shocking content from appearing in the first place,” Home Secretary Sajid Javid said in a statement released by his office. “Our new proposals will protect UK citizens and ensure tech firms will no longer be able to ignore their responsibilities.”
Search engines, online messaging services and file-hosting sites will also come under the remit of a new regulator. Companies will be required to publish annual reports on what they have done to remove and block harmful content, and streaming sites aimed at children, such as YouTube Kids, will have to block material such as violent imagery or pornography.
The move comes after Facebook Chief Executive Officer Mark Zuckerberg on March 30 called for “a more active role for governments and regulators.”