Facebook has said its Artificial Intelligence (AI) and Machine Learning (ML) technologies have helped remove 8.7 million images depicting child nudity or sexual exploitation from its platform in the past three months.
Almost all of these images (99 per cent) were removed before anyone reported them, Facebook said on Wednesday.
Facebook's "Community Standards" ban child exploitation and to avoid the potential for abuse, it takes action on nonsexual content as well, like seemingly benign photos of children in the bath.
"We also remove accounts that promote this type of content," Facebook's Global Head of Safety Antigone Davis said in a statement.
Facebook also has specially trained teams with backgrounds in law enforcement, online safety, analytics and forensic investigations to review potentially harmful content, Davis said.
The company also makes extensive use of technology to identify child exploitative content on its platform and to find accounts that engage in potentially inappropriate interactions with children, so that it can remove them and prevent additional harm.
"In addition to photo-matching technology, we're using Artificial Intelligence and Machine Learning to proactively detect child nudity and previously unknown child exploitative content when it's uploaded," Davis said.
"We also collaborate with other safety experts, NGOs and companies to disrupt and prevent the sexual exploitation of children across online technologies," Davis said.
Facebook said it would join Microsoft and other industry partners next month to begin building tools for smaller companies to prevent the "grooming" of children for sexual exploitation.
--IANS