Children must understand risks as UK schools report pupils abusing AI to make sexual imagery of other children

27 Nov 2023 UKSIC

‘Urgent action’ is needed to prevent this technology being abused further in schools.

Initial reports show schoolchildren in the UK are now using AI to generate indecent images of other children, with experts warning urgent action is needed to help children understand the risks of making this sort of imagery.

The UK Safer Internet Centre (UKSIC) says it has now begun receiving a small number of reports from schools that children are making, and attempting to make, indecent images of other children using AI image generators.

Teachers are warning that pupils are using the technology to create imagery which legally constitutes child sexual abuse material.

Children may be making this imagery out of curiosity or sexual exploration, or for a range of other reasons, but images can quickly get out of hand, and children risk “losing control” of the material, which can then circulate on the open web.

Parents and teachers are urged to help children understand the risks associated with making AI-generated imagery of this sort.

The UK Safer Internet Centre, a child protection organisation made up of the Internet Watch Foundation (IWF), SWGfL, and Childnet, says this imagery can have many harmful effects on children – and warns it could also be used to abuse or blackmail them.

The UKSIC says schools must ensure their filtering and monitoring systems can effectively block illegal material across their school devices to combat this emerging threat.

Imagery of child sexual abuse is illegal in the UK, whether AI-generated or photographic – even cartoons and other less realistic depictions are illegal to make, possess, and distribute.

David Wright, Director at UKSIC and CEO at SWGfL, said children may be exploring the potential of AI image generators without fully appreciating the harm they may be causing, or the risks of the imagery being shared elsewhere online.
 
He said: “We are now getting reports from schools of children using this technology to make, and attempt to make, indecent images of other children.
 
“This technology has enormous potential for good, but the reports we are seeing should not come as a surprise. Young people are not always aware of the seriousness of what they are doing, yet these types of harmful behaviours should be anticipated when new technologies, like AI generators, become more accessible to the public. 
 
“We clearly saw how prevalent sexual harassment and online sexual abuse were in the 2021 Ofsted review, and that was before generative AI technologies emerged.
 
“Although the case numbers are currently small, we are in the foothills and need to see steps being taken now, before schools become overwhelmed and the problem grows. An increase in criminal content being made in schools is something we never want to see, and interventions must be made urgently to prevent this from spreading further.
 
“We encourage schools to review their filtering and monitoring systems and reach out for support when dealing with incidents and safeguarding matters.” 

Victoria Green, CEO of the Marie Collins Foundation, a charity which works with children affected by technology-assisted sexual abuse and with their families, said the children using these tools may not always understand the harms.
 
She said: “Whatever the intent, the impact of AI-generated imagery on the person depicted can be lifelong. Whatever the motivation, we must remember that behind every AI image is a real child who may not know where to access help.

“The imagery may not have been created by children to cause harm but, once shared, this material could get into the wrong hands and end up on dedicated abuse sites. There is a real risk that the images could be further used by sex offenders to shame and silence victims.”

Schools can seek support from the UK Safer Internet Centre via their Professionals Online Safety Helpline for further guidance on safeguarding.

A spokesperson from the Professionals Online Safety Helpline said: “Professionals need to be aware that this is a concern that has the potential to grow if not managed correctly.
 
“For many schools, this will be an unprecedented safeguarding issue that requires immediate action to protect those involved.
 
“We encourage schools to contact the Professionals Online Safety Helpline if they need support in dealing with cases of this nature.
 
“While we are not able to receive or report any AI-generated child sexual abuse content, we can support you through the appropriate safeguarding routes and suggest further actions for future prevention. If you do need to report material, you can contact the Internet Watch Foundation.”

In October, the IWF warned that AI-generated images of child sexual abuse are now so realistic that many would be indistinguishable from real imagery – even to trained analysts.

The IWF has discovered thousands of AI-generated images of child sexual abuse online, and warned more needs to be done to prevent them being produced at scale.

Among its recommendations, the IWF’s report called for international alignment on the laws governing how this content is treated, a review of UK laws to ensure they are fit for purpose to tackle AI-generated child sexual abuse, and greater regulatory oversight of AI models, including risk mitigation strategies to prevent the technology’s abuse.

Emma Hardy, UK Safer Internet Centre Director, and Communications Director at the IWF, said: “The potential for abuse of this technology is terrifying. This is not some theoretical risk. It’s something we are seeing here and now.
 
“Generative AI has matured at such a rate. The quality of the images that we’re seeing is comparable to the professional photos taken annually of children in schools up and down the country.
 
“The photo-realistic nature of AI-generated imagery of children means sometimes, the children we see are recognisable as victims of previous sexual abuse.
 
“Sometimes they are real children, or even celebrities, who have never suffered sexual abuse but whose images and likenesses are manipulated. Whatever the reason for making it, children must be warned it can spread across the internet and end up being seen by complete strangers and sexual predators.
 
“It could be used to blackmail or groom children – and serves to normalise sexual violence against children. That schoolchildren are experimenting with it is a wakeup call.
 
“We must see measures put in place to prevent the abuse of this technology. Right now, unchecked, unregulated AI is making children less safe.” 
