Business Standard

AI advances fuel rise in child abuse deepfakes, UK safety watchdog warns

The UK-based safety watchdog has observed that most videos found on a dark web forum used by paedophiles are partial deepfakes

Nandini Singh New Delhi

Recent advancements in artificial intelligence (AI) are being exploited by predators to create AI-generated videos of child sexual abuse, raising concerns about a potential increase in such content as technology progresses, The Guardian reports, citing a UK-based safety watchdog.

The Internet Watch Foundation (IWF) reports that most of these instances involve the manipulation of existing child sexual abuse material (CSAM) or adult pornography, where a child’s face is superimposed onto the footage. A smaller number of cases feature entirely AI-generated videos lasting about 20 seconds.

The IWF, which tracks CSAM globally, warned that more AI-generated CSAM videos could proliferate as AI tools become more accessible and user-friendly.
 

Dan Sexton, IWF’s chief technology officer, noted a worrying trend: “If AI video tools follow the same pattern as AI-generated still images, we can expect a rise in CSAM videos.” He said that future videos could be of “higher quality and realism”.

IWF analysts also observed that most videos found on a dark web forum used by paedophiles are partial deepfakes. These involve using freely available AI models to superimpose a child’s face, including images of known CSAM victims, onto existing CSAM videos or adult pornography. The IWF identified nine such videos.

The wholly AI-generated videos identified so far are fewer in number and of more basic quality. Analysts caution, however, that this may represent the ‘worst’ that fully synthetic video production will be, as the tools continue to improve.

The IWF highlighted that AI-generated CSAM images have become more photorealistic this year compared with 2023, when it first detected such content. A snapshot study of a single dark web forum revealed 12,000 new AI-generated images posted over a month-long period. The IWF found that nine out of ten of these images were so realistic they could be prosecuted under UK laws governing real CSAM.

The organisation, which operates a public hotline for reporting abuse, found examples of offenders selling AI-generated CSAM images online in place of real, non-AI-generated material.
 
Susie Hargreaves, IWF’s chief executive, issued a stark warning: “Without proper controls, generative AI tools provide a playground for online predators to realise their most perverse and sickening fantasies. Even now, the IWF is starting to see more of this type of material being shared and sold on commercial child sexual abuse websites on the internet.”

The IWF is advocating for legal reforms to criminalise the creation of guides for generating AI-made CSAM and the development of ‘fine-tuned’ AI models capable of producing such material.
 
In a related move, Baroness Kidron, a crossbench peer and child safety campaigner, proposed an amendment to the Data Protection and Digital Information Bill this year to criminalise the creation and distribution of these AI models. However, the bill was shelved after Rishi Sunak called a general election in May.

 

First Published: Jul 22 2024 | 5:40 PM IST
