When users upload offensive content intended to disturb others, it has traditionally had to be seen and flagged by at least one person.
Offensive posts include content that is hate speech, threatening or pornographic; that incites violence; or that contains nudity or graphic or gratuitous violence.
For example, a bully, jilted ex-lover, stalker, terrorist or troll could post offensive photos to someone's wall, to a group or event, or to the news feed, TechCrunch reported.
Now, AI is helping Facebook unlock active moderation at scale by having computers scan every uploaded image before anyone sees it.
"Today we have more offensive photos being reported by AI algorithms than by people," said Joaquin Candela, Facebook's Director of Engineering for Applied Machine Learning.
As many as 25 per cent of Facebook's engineers now regularly use the company's internal AI platform to build features and run the business, Facebook said.
This AI helps rank news feed stories, describe the content of photos aloud to visually impaired users and automatically write closed captions for video ads, which increase view time by 12 per cent.