Schools given advice on image safety to keep ahead of threat from AI blackmailers
Schools have been given vital guidance to help safeguard pupils online in response to an emerging AI threat from blackmailers.
The guidance sets out the risks of sharing images and videos of pupils on websites and social media platforms, and gives key advice on managing children’s image security in education settings.
Led by the UK Online Harms Early Warning Working Group (EWWG), the image security guidance has been updated following reports of criminals targeting schools and using AI tools to create child sexual abuse material of pupils.
Adult staff at schools and pupils over 18 are also vulnerable to being targeted in this way.
In a recent incident, UK police called on the Internet Watch Foundation (IWF), a member of the EWWG, to help block the distribution of sexual images of children that had been sourced from a secondary school’s website and altered using AI tools. The blackmailers had sent the criminal imagery to the school with the threat to share more widely if the school did not pay them money.
On this occasion, IWF analysts assessed around 150 confirmed images of child sexual abuse. These were then ‘hashed’ or given a digital fingerprint and added to a blocking list which tech companies can use to stop the imagery from being viewed or downloaded.
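The ‘hashing’ step described above can be sketched in a few lines: once an image has a digital fingerprint, a platform can refuse known material by fingerprint alone, without storing or viewing the imagery itself. This is an illustrative sketch only — real deployments such as the IWF hash list also use perceptual hashes, which survive resizing and re-encoding, rather than the exact-match digest shown here, and the function names below are hypothetical.

```python
import hashlib

def hash_image(path: str) -> str:
    # Compute a SHA-256 digest of the raw file bytes.
    # (Illustrative only: production hash lists also use perceptual
    # hashing, which matches visually similar images, not just
    # byte-identical files.)
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def is_blocked(path: str, blocklist: set[str]) -> bool:
    # An upload is refused if its fingerprint appears on the list.
    return hash_image(path) in blocklist
```

In practice, tech companies receive the list of fingerprints rather than the imagery, so known material can be blocked at upload or download time without anyone having to view it.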
While incidents of this type do not yet appear to be widespread, the concern from police, education professionals and child safety organisations in the EWWG is that it is ‘only a matter of time’ before more schools are targeted by criminals in this way.
Research also shows that potentially harmful content made using AI is a major source of concern for many young people. Some 60% of eight to 17-year-olds are worried that AI may be used to create inappropriate or sexual content of themselves or their peers. Concern extends beyond young people themselves: 65% of parents and carers are worried about someone using AI to create images of their child.
EWWG chair Will Gardner said: “It is incredibly sad to think that pictures of children taking part in school activities, showing their rightful and positive place in their communities, have now become a target for cynical scammers willing to exploit children to make money.
“Unfortunately, we are increasingly seeing advances in new technologies being exploited by criminals and children’s images online are vulnerable to being manipulated and misused.
“It is vital that schools understand the risks around image security across their online platforms and that they are given the necessary information and guidance on how to best manage the photos and videos of students.
“The safeguarding of students’ imagery should be a high priority for education settings and, where schools might once have been proud of what they could display on their website, now they can take pride in ensuring they have taken all possible steps to best protect their students and staff online.”
IWF Hotline Manager Tamsin McNally said: “These blackmail threats to schools feel very similar to the cases of financially motivated sexual extortion of children that we see every day in the IWF Hotline.
“However, owing to the rapid improvement in AI technology, schools and hundreds of children’s images can now be used for blackmail by criminals.
“We feel it is only a matter of time before more schools are targeted in this manner, and our experience is that girls are usually the primary victims of image abuse.
“Thankfully, measures such as the IWF and Childline’s Report Remove tool for under-18s, one of many highlighted in this new guidance, can be used to try and take back control of this imagery.”
The image security advice, which has now been made available to education settings and other organisations working with children across the UK, offers a checklist of actions for schools to help staff recognise and respond to incidents of image-based abuse and ensure that students’ safety, dignity and emotional wellbeing are prioritised.
Schools are also warned not to engage with anyone attempting to blackmail their setting and to contact the police immediately. In certain circumstances, paying ransom demands can be illegal.
UK law enforcement, working with international partners, regulators and industry, is taking action every day to tackle AI-generated child sexual abuse material and online blackmail.
Jess Phillips, Minister for Safeguarding and Violence Against Women and Girls, said: “This is a deeply worrying emerging threat, with criminals using AI to exploit children and turn innocent images into tools of abuse and blackmail.
“I want to thank the Internet Watch Foundation and the UK Online Harms Early Warning Working Group for their vital work in tackling this harm and supporting schools to stay ahead of these risks. It is crucial that children and those who care for them have the information and confidence to stay safe online.
“Our new laws will soon ban AI tools designed to generate child sexual abuse material and crack down on those sharing guides on how to create it. We will not hesitate to go further if necessary and make sure our laws stay up to date with the latest threats.”
Notes to editors:
The UK Online Harms Early Warning Working Group is designed to strengthen early information-sharing between helplines, hotlines, government, law enforcement and reporting bodies to better identify issues or new trends relating to online harms at an early stage.
The guidance has been drafted and supported by the following UK Online Harms Early Warning Working Group members:
- Childnet
- Education Scotland
- Embrace (Child Victims of Crime)
- IWF
- Lucy Faithfull Foundation
- Marie Collins Foundation
- National Crime Agency – CEOP Education
- NSPCC
- Safeguarding Board for Northern Ireland
- Samaritans
- SWGfL
- The Children’s Society
- Welsh Government
Contacts:
IWF: Cat McShane, catherine.mcshane@iwf.org.uk, +44 (0) 7572 783227
Childnet: press@childnet.com