Child safety must be a priority for global leaders and AI developers

07 Mar 2025 UK SIC

Following the Paris AI Summit, the Internet Watch Foundation’s (IWF) Head of Policy and Public Affairs Hannah Swirsky reflects on the harmful effects of AI-generated child sexual abuse content and the need to put safety at the centre of global discussions and key legislation on this issue.

The Paris AI Summit on 10-11 February marked a departure from the 2023 AI Summit in the UK, where the emerging threat of AI-generated child sexual abuse imagery was discussed by the IWF and the then Home Secretary. That 2023 event concluded with the signing of a statement by organisations such as TikTok, Snap and Stability AI and the governments of the US and Australia, committing to the development of AI in “a way that is for the common good of protecting children from sexual abuse across all nations”.

Two years later, the IWF, a partner in the UK Safer Internet Centre, is concerned by the lack of child safety considerations in discussions around this emerging technology and by the conscious decision to shift the debate towards AI’s economic benefits.

This contrasts with the rapid proliferation and improving quality of AI-generated child sexual abuse imagery that IWF analysts have seen since they began monitoring it in 2023. As the leading UK force in the fight against child sexual abuse material (CSAM) on the global internet, the IWF warns of the consequences of ignoring children’s safety and calls for a renewed discussion to ensure that safety is embedded in AI and recognised as necessary to support growth and innovation.
