More people exposed to AI child sexual abuse images and videos on the clear web, warns the Internet Watch Foundation
In the past six months, the Internet Watch Foundation (IWF), a partner in the UK Safer Internet Centre, has seen a 6% increase in the number of reports confirmed as containing criminal AI child sexual abuse material, compared with the previous 12 months.
Most of the reports (78%) were made by members of the public who were accidentally exposed to the content on sites such as forums or AI galleries. Almost all of the content (99%) was hosted on the open web.
The IWF's expert analysts warn of the harm this content causes to victims, some of whose images and videos of sexual abuse have been used to generate new and more extreme AI content, and of the distress it causes to members of the public, who are increasingly being exposed to criminal material online.
Derek Ray-Hill, Interim Chief Executive Officer at the IWF, said: “People can be under no illusion that AI generated child sexual abuse material causes horrific harm, not only to those who might see it but to those survivors who are repeatedly victimised every time images and videos of their abuse are mercilessly exploited for the twisted enjoyment of predators online.
“To create the level of sophistication seen in the AI imagery, the software used has also had to be trained on existing sexual abuse images and videos of real child victims shared and distributed on the internet.
“The protection of children and the prevention of AI abuse imagery must be prioritised by legislators and the tech industry above any thought of profit. Recent months show that this problem is not going away and is in fact getting worse. We urgently need to bring laws up to speed for the digital age, and see tangible measures being put in place that address potential risks.”