Images of child sexual abuse created by artificial intelligence are becoming “significantly more realistic”, according to an online safety watchdog.
The Internet Watch Foundation (IWF) said advances in AI are being reflected in illegal content created and consumed by paedophiles: “In 2024, the quality of AI-generated videos improved exponentially, and all types of AI imagery assessed appeared significantly more realistic as the technology developed.”
The IWF revealed in its annual report that it received 245 reports of AI-generated child sexual abuse imagery that broke UK law in 2024 – an increase of 380% on the 51 seen in 2023. The reports equated to 7,644 images and a small number of videos, reflecting the fact that one URL can contain multiple examples of illegal material.
The largest proportion of those images was “category A” material, the term for the most extreme type of child sexual abuse content that includes penetrative sexual activity or sadism. This accounted for 39% of the actionable AI material seen by the IWF.
The government announced in February that it will become illegal to possess, create or distribute AI tools designed to generate child sexual abuse material, closing a legal loophole that had alarmed police and online safety campaigners. It will also become illegal for anyone to possess manuals that teach people how to use AI tools either to make abusive imagery or to help them abuse children.
The IWF, which operates a hotline in the UK but has a global remit, said the AI-generated imagery is increasingly appearing on the open internet and not just on the “dark web” – an area of the internet accessed by specialised browsers. It said the most convincing AI-generated material can be indistinguishable from real images and videos, even for trained IWF analysts.
The watchdog’s annual report also revealed a record number of webpages hosting child sexual abuse imagery in 2024. The IWF said there were 291,273 reports of child sexual abuse imagery last year, an increase of 6% on 2023. The majority of victims in the reports were girls.
The IWF also announced it was making a new safety tool available to smaller websites for free, to help them spot and prevent the spread of abuse material on their platforms.
The tool, called Image Intercept, can detect and block images that appear in an IWF database containing 2.8m images that have been digitally marked as criminal imagery. The watchdog said it would help smaller platforms comply with the newly introduced Online Safety Act, which contains provisions on protecting children and tackling illegal content such as child sexual abuse material.
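The report does not describe the matching mechanism in detail. As a minimal illustrative sketch, assuming “digitally marked” means the known images have been hashed into a blocklist, a platform-side check might look like the following. The `KNOWN_HASHES` set and `is_known_image` function are hypothetical, and production systems of this kind typically use perceptual hashes that survive resizing and re-encoding rather than the exact cryptographic hash shown here.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of hex digests supplied by a body such as the
# IWF; platforms never assemble such a list themselves. In a real
# deployment these would be perceptual hashes, not SHA-256 digests.
KNOWN_HASHES: set[str] = set()  # e.g. loaded from an IWF-supplied file


def is_known_image(path: Path) -> bool:
    """Return True if the file's digest appears on the blocklist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest in KNOWN_HASHES
```

A platform would run a check of this kind at upload time, rejecting and reporting any match before the image is ever stored or displayed.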
Derek Ray-Hill, the interim chief executive of the IWF, said making the tool freely available was a “major moment in online safety”.
The technology secretary, Peter Kyle, said the rise in AI-generated abuse and sextortion – where children are blackmailed over the sending of intimate images – underlined how “threats to young people online are constantly evolving”. He said the new Image Intercept tool was a “powerful example of how innovation can be part of the solution in making online spaces safer for children”.