Meta Set to Label AI-Generated Images

22 Feb 2024 UK SIC

Meta has announced plans to label AI-generated images on Facebook, Instagram, and Threads in a bid to enhance transparency across its platforms.

The company is part of a cohort of industry partners collaborating to establish common technical standards for identifying AI-created content, covering AI-generated images, audio, and video. Partners currently include Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock.

Concerns have been rising around the growing ability of AI to generate realistic and believable content, with deepfakes becoming increasingly easy to create and distribute across social media. These latest changes aim to address those concerns by helping users more easily distinguish between AI-generated and human-created content.

How does AI labelling work?

AI labelling will involve embedding a 'signal' within AI-generated files, creating a marker that is expected to be hard to tamper with or remove. Initially, this marker will be applied to content created with Meta's own AI tools, and it will extend to content produced with tools from other companies such as Google, OpenAI, Microsoft, Adobe, Midjourney, and Shutterstock once they adopt similar labelling practices.
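To illustrate the general idea, the sketch below shows one simplified way a provenance marker could be written into an image file's metadata using Python and the Pillow library. The field names and values here are hypothetical and chosen for illustration only; the industry standards referenced by Meta and its partners (such as IPTC metadata and C2PA manifests) are considerably more robust, and a plain metadata tag like this one is easy to strip, which is why harder-to-remove signals are also part of the effort.

```python
# A minimal sketch of labelling an image as AI-generated via PNG metadata.
# Field names below are hypothetical examples, not Meta's actual scheme.
from PIL import Image
from PIL.PngImagePlugin import PngInfo


def label_as_ai_generated(src_path: str, dst_path: str) -> None:
    """Copy an image, adding text chunks that flag it as AI-generated."""
    image = Image.open(src_path)
    metadata = PngInfo()
    metadata.add_text("DigitalSourceType", "trainedAlgorithmicMedia")  # illustrative field
    metadata.add_text("GeneratedBy", "example-ai-tool")                # illustrative field
    image.save(dst_path, pnginfo=metadata)


def read_label(path: str) -> dict:
    """Read back any text chunks, e.g. to decide whether to show an 'AI' label."""
    with Image.open(path) as image:
        return dict(getattr(image, "text", {}) or {})


if __name__ == "__main__":
    label_as_ai_generated("generated.png", "generated_labelled.png")
    print(read_label("generated_labelled.png"))
```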

While these measures mark progress in identifying AI content, there’s still much to learn about the potential impact of AI. SWGfL’s AI Topic Hub offers insights and free lesson plans on AI. Additionally, Ken Corish has recently discussed the implications of AI in education on the SWGfL Interface Podcast.

In April, SWGfL and the UK Safer Internet Centre will be hosting an AI in Education Online Safety Clinic to help educators shape their strategies around AI. Professionals can register for the event and find out more here.
